Service
A Pod IP is a virtual IP visible only inside the cluster; it cannot be reached from outside. It is also unstable: whenever a ReplicaSet scales Pods up or down, Pod IPs can change at any time, which makes it hard to address the service reliably. The Service object in Kubernetes is the core solution to both problems.
Service types
When defining a Service we must specify spec.type, which has four options:
● ClusterIP: the default. Assigns the Service a cluster IP, a virtual IP allocated automatically by Kubernetes, so the Service is reachable only from inside the cluster.
● NodePort: exposes the Service on a port of each Node. Any NodeIP:nodePort is routed to the ClusterIP and reaches the service. The default port range is 30000-32767.
● LoadBalancer: builds on NodePort by having the cloud provider create an external load balancer that forwards requests to <NodeIP>:NodePort. This mode is only usable on cloud platforms (AWS and the like).
● ExternalName: forwards the service to a specified domain via a DNS CNAME record (set through spec.externalName); requires kube-dns 1.7 or later.
Typical scenarios:
Access from inside the cluster: ClusterIP
Access from the cluster to the outside: ExternalName
Access to the cluster from outside:
Private cloud: NodePort
Public cloud: LoadBalancer
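ClusterIP, NodePort, and ExternalName are all demonstrated later in this section, but LoadBalancer is not, so here is a minimal sketch, assuming a cloud provider whose controller can actually provision the balancer (the name myapp-lb is illustrative; the selector mirrors the myapp examples below):
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer   # the cloud provider allocates an external load balancer
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
Once provisioning finishes, kubectl get svc shows an EXTERNAL-IP for myapp-lb; on bare metal the column stays <pending>.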
Service proxy modes
● userspace proxy mode
● iptables proxy mode
● ipvs proxy mode:
In ipvs mode, kube-proxy watches Kubernetes Service and Endpoints objects, calls netlink to create ipvs rules accordingly, and periodically syncs the ipvs rules against the Service and Endpoints objects to keep ipvs state consistent with the desired state. When the service is accessed, traffic is redirected to one of the backend Pods. Like iptables, ipvs is based on netfilter hooks, but it uses a hash table as the underlying data structure and works in kernel space, which means ipvs redirects traffic faster when syncing proxy rules and performs better. ipvs also offers more load-balancing algorithms, for example:
rr: round robin
lc: least connections
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
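To actually run kube-proxy in ipvs mode, the mode is set in the kube-proxy configuration. A minimal sketch of the relevant KubeProxyConfiguration fields (rr is shown for illustration; it is also the default scheduler):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # default is iptables
ipvs:
  scheduler: "rr"   # any of the algorithms listed above
In a kubeadm cluster this configuration lives in the kube-proxy ConfigMap in kube-system, and the ipvs kernel modules (ip_vs, ip_vs_rr, and so on) must be loaded on every node.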
ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
      - name: myapp
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
Access via the IP the cluster assigns to the Service
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 241.254.0.1 <none> 443/TCP 25d <none>
myapp ClusterIP 241.254.228.65 <none> 80/TCP 36h app=myapp,release=stabel
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deploy-6d4bc5cc76-7n5tn 1/1 Running 2 32h 241.255.0.122 master <none> <none>
myapp-deploy-6d4bc5cc76-fprjr 1/1 Running 3 32h 241.255.3.41 node1 <none> <none>
$ ipvsadm -Ln
...
TCP 241.254.228.65:80 rr
-> 241.255.0.122:80 Masq 1 0 1
-> 241.255.3.41:80 Masq 1 0 1
....
$ curl 241.254.228.65/hostname.html
myapp-deploy-6d4bc5cc76-fprjr
$ curl 241.254.228.65/hostname.html
myapp-deploy-6d4bc5cc76-7n5tn
Access via the DNS name the cluster assigns to the Service
In a k8s cluster every Service gets a DNS name. Services come in two kinds: ordinary Services and headless Services. An ordinary Service's DNS name resolves to the Service's cluster IP, while a headless Service's name resolves directly to the IPs of the Pods it covers.
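A headless Service is simply a Service whose clusterIP is set to None; a minimal sketch reusing the myapp selector from this section (the name myapp-headless is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None    # headless: DNS returns the Pod IPs directly
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
A query for myapp-headless.default.svc.cluster.local then returns one A record per ready Pod instead of a single cluster IP.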
Service domain names are resolved by the CoreDNS server; for details on how DNS resolution works inside the cluster, see the article on Kubernetes in-cluster DNS resolution.
$ yum -y install bind-utils
#Use dig to test whether the svc name resolves through the DNS server
$ dig -t A myapp.default.svc.cluster.local. @241.254.0.10
......
;; ANSWER SECTION:
myapp.default.svc.cluster.local. 30 IN A 241.254.228.65
....
$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 241.254.0.10 <none> 53/UDP,53/TCP,9153/TCP 25d
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 241.254.0.1 <none> 443/TCP 25d
myapp ClusterIP 241.254.228.65 <none> 80/TCP 2d1h
#Two ways to access it
#1. On the host, add a nameserver entry in /etc/resolv.conf pointing at kube-dns
#cat /etc/resolv.conf
#search localdomain
#nameserver 192.168.234.2
#nameserver 241.254.0.10
$ curl myapp.default.svc.cluster.local./hostname.html
myapp-deploy-6d4bc5cc76-7n5tn
$ curl myapp.default.svc.cluster.local./hostname.html
myapp-deploy-6d4bc5cc76-fprjr
#2. Pods in the cluster resolve against kube-dns by default, so you can also query from inside any pod
#The pod's resolv.conf configures a search domain list
#The search list appends suffixes to short names in turn: for myapp it tries myapp.default.svc.cluster.local,
#then myapp.svc.cluster.local, then myapp.cluster.local, until one matches
$ kubectl exec mypod -- curl myapp.default.svc.cluster.local./hostname.html
$ kubectl exec mypod -- curl myapp/hostname.html
$ kubectl exec mypod -- curl myapp.default/hostname.html
$ kubectl exec mypod -- curl myapp.default.svc/hostname.html
#All of these return a result; short names are just slightly less efficient because each extra suffix costs another DNS query
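For reference, a pod's /etc/resolv.conf usually looks like the sketch below (addresses vary per cluster); the ndots:5 option is what sends short names through the search list first:
nameserver 241.254.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5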
If you want to be able to ping a Service domain name, kube-proxy must run in ipvs proxy mode. The reasons:
1. The Service IP is a virtual address not assigned to any network interface; when packets are transmitted, this IP is never used as the packet's source or destination IP.
2. In iptables mode the IP is not configured on any network device, so when you ping it no network stack answers the request; in iptables mode the clusterIP appears only inside the NAT rules on the PREROUTING chain.
3. In ipvs mode, every clusterIP is configured on the kube-ipvs0 virtual NIC on each node, so pinging it succeeds.
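On a node running in ipvs mode this is easy to verify: every clusterIP should appear as a /32 address on the kube-ipvs0 dummy interface (output abbreviated and illustrative):
$ ip addr show kube-ipvs0
...
    inet 241.254.228.65/32 scope global kube-ipvs0
...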
NodePort
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 241.254.0.1 <none> 443/TCP 176m
myapp-nodeport NodePort 241.254.6.59 <none> 80:30001/TCP 4m10s
#Access via the cluster-internal IP
$ curl 241.254.6.59/hostname.html
myapp-deploy-6d4bc5cc76-lfbbh
$ curl 241.254.6.59/hostname.html
myapp-deploy2-7bddc44f9c-srgj8
#Access via the host machine's IP
$ curl 192.168.116.128:30001/hostname.html
myapp-deploy-6d4bc5cc76-hkdlr
$ curl 192.168.116.128:30001/hostname.html
myapp-deploy-6d4bc5cc76-lfbbh
ExternalName
kind: Service
apiVersion: v1
metadata:
  name: myservice
  namespace: default
spec:
  type: ExternalName
  externalName: www.baidu.com
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 241.254.0.1 <none> 443/TCP 3h43m
myservice ExternalName <none> www.baidu.com <none> 6m33s
Because CoreDNS watches the apiserver, whenever a Service request is submitted to the apiserver with kubectl, CoreDNS creates a DNS record for the Service for name resolution. When the type is ExternalName, CoreDNS resolves <service-name>.<namespace>.svc.cluster.local to the value of externalName; in the example above, myservice.default.svc.cluster.local resolves to www.baidu.com.
$ nslookup myservice.default.svc.cluster.local
Server: 241.254.0.10
Address: 241.254.0.10#53
myservice.default.svc.cluster.local canonical name = www.baidu.com.
www.baidu.com canonical name = www.a.shifen.com.
Name: www.a.shifen.com
Address: 180.101.49.12
Name: www.a.shifen.com
Address: 180.101.49.11
Ingress
Components
ingress controller
Converts newly added Ingress resources into an Nginx configuration file and makes it take effect
ingress resource
Abstracts the Nginx configuration into an Ingress object; adding a new service only requires writing a new Ingress yaml file
How it works
1. The ingress controller talks to the kubernetes api to dynamically sense changes to ingress rules in the cluster.
2. It reads the rules (each rule states which domain maps to which service) and generates a piece of nginx configuration from them.
3. It writes that into the nginx-ingress-controller pod, which runs an Nginx server; the controller writes the generated nginx configuration into /etc/nginx/nginx.conf.
4. It then reloads nginx to make the configuration take effect, achieving per-domain routing and dynamic updates.
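To inspect the configuration the controller actually rendered, you can dump it from the controller pod; a sketch assuming the labels used in the manifest below:
$ POD=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n ingress-nginx $POD -- cat /etc/nginx/nginx.conf
Each Ingress rule should show up as a server block for its host.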
How traffic flows
Ingress-nginx binds to a Service by matching serviceName; the Service is what gives the outside world a port into the pods. Without ingress-nginx, kube-proxy provides only layer-4 proxying, pods can only be reached by IP, and the Service has to be exposed with type NodePort. With ingress-nginx there is effectively an extra proxy layer in front of the Service: traffic passes through ingress-nginx first, so the Service type no longer needs to be changed to NodePort.
Test deployment
ingress-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-body-size: 100M
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
#kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry:5000/library/k8s/nginx-ingress-controller:0.25.0_x
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              hostPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
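To deploy and verify, a minimal sketch (the image above points at a private registry, so substitute one you can pull):
$ kubectl apply -f ingress-nginx.yaml
$ kubectl get pods -n ingress-nginx -o wide
Because the controller runs as a DaemonSet with hostPort 80/443, each node should end up with one Running controller pod answering on the node's ports 80 and 443.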
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: www.aaa.com
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
        path: /
#serviceName must match the name of the Service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 241.254.0.1 <none> 443/TCP 40m
myapp ClusterIP 241.254.50.93 <none> 80/TCP 37m
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress <none> www.aaa.com 80 25m
#Add an entry for the domain in the host machine's hosts file
$ cat /etc/hosts
192.168.234.134 www.aaa.com
$ curl www.aaa.com/hostname.html
myapp-deploy-6d4bc5cc76-g7tqw
$ curl www.aaa.com/hostname.html
myapp-deploy-6d4bc5cc76-jg4xf
HTTPS deployment test
#Self-sign a certificate: use the command below, or follow a guide on creating your own HTTPS certificates
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/O=aaa/CN=*.aaa.com"
$ ll
total 8
-rw-r--r-- 1 root root 1131 Sep 19 16:30 tls.crt
-rw-r--r-- 1 root root 1704 Sep 19 16:30 tls.key
#Import the certificate files into a k8s secret (the name nginx-test here must match the tls section of the ingress.yaml below)
$ kubectl create secret tls nginx-test --cert=tls.crt --key=tls.key
$ kubectl get secret
NAME TYPE DATA AGE
nginx-test kubernetes.io/tls 2 53m
#Create the HTTPS ingress.yaml
$ cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # force redirect to HTTPS, true/false; enabled by default
spec:
  tls:
  - hosts:
    - www.aaa.com
    secretName: nginx-test
  # multi-domain configuration
  #- hosts:
  #  - www.bbb.com
  #  secretName: nginx-bbb
  rules:
  - host: www.aaa.com
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
        path: /
  #- host: www.bbb.com
  #  http:
  #    paths:
  #    - backend:
  #        serviceName: myapp3
  #        servicePort: 80
  #      path: /
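After applying this, the hosts entry from the HTTP test can be reused to check TLS; -k skips verification since the certificate is self-signed, and the response is again one of the backend pod hostnames:
$ curl -k https://www.aaa.com/hostname.html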
Nginx rewrite
Name | Description | Value
---|---|---
nginx.ingress.kubernetes.io/rewrite-target | Target URI to which the traffic must be redirected | string
nginx.ingress.kubernetes.io/ssl-redirect | Indicates whether the location section is only accessible via SSL (defaults to true when the Ingress contains a certificate) | bool
nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirect to HTTPS even if the Ingress does not have TLS enabled | bool
nginx.ingress.kubernetes.io/app-root | Defines the application root that the controller must redirect to in the / context | string
nginx.ingress.kubernetes.io/use-regex | Indicates whether the paths defined on an Ingress use regular expressions | bool
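These annotations can be combined; as a sketch (the /app prefix is hypothetical), use-regex plus a capture group lets rewrite-target strip a path prefix before the request reaches the myapp Service from earlier (capture-group rewrites need controller 0.22+, which the 0.25.0 image above satisfies):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-rewrite
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1   # $1 = the group captured in path
spec:
  rules:
  - host: www.aaa.com
    http:
      paths:
      - path: /app/(.*)
        backend:
          serviceName: myapp
          servicePort: 80
A request for www.aaa.com/app/hostname.html is then forwarded to myapp as /hostname.html.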
Associating Pods with a Service
Pods and Services are associated by labels: spec.template.metadata.labels tags the pods with the mediaplus-main label, and the Service's spec.selector must match the values in spec.template.metadata.labels.
apiVersion: v1
kind: Service
metadata:
  name: mediaplus-main
  labels:
    app: mediaplus-main
spec:
  ..............
  selector:
    app: mediaplus-main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediaplus-main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mediaplus-main
  template:
    metadata:
      labels:
        app: mediaplus-main
    spec:
      containers:
      - name: mediaplus-main
Associating a Service with an Ingress
apiVersion: v1
kind: Service
metadata:
  name: mediaplus-main
  labels:
    app: mediaplus-main
spec:
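For illustration, the Ingress side of this binding would reference the Service through serviceName, following the same pattern as the ingress.yaml earlier (the host and port here are hypothetical):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mediaplus-main
spec:
  rules:
  - host: www.mediaplus.example.com    # hypothetical host
    http:
      paths:
      - backend:
          serviceName: mediaplus-main  # must equal the Service's metadata.name
          servicePort: 80              # must match a port the Service exposes (elided above)
        path: /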