Deploying NGINX OSS KIC


Preface

Compared with the Ingress controller maintained by the Kubernetes community, the official NGINX Ingress Controller offers richer configurability and closer support from the NGINX team itself. In today's wave of containerization it is a worthwhile option; of course there are alternatives such as Traefik and APISIX, and which one you choose comes down to your own trade-offs.

Tip: This article is excerpted from the lab manual for the DevOps community's free online course "Dynamic Release of Containerized Applications".

[Figure: NGINX KIC architecture]

[Reference: NGINX KIC installation guide]

1. Deploying NGINX OSS KIC

This lab provides a single-node Kubernetes cluster, with the master and worker roles running on the same Ubuntu server. The relevant versions are as follows:

  • OS version: Ubuntu 18.04.4 LTS
  • Kubernetes version: v1.25.0
  • NGINX KIC version: 2.3.0
  • CNI: Flannel

After logging in to the server over SSH, first confirm the status of the Kubernetes cluster.

root@ubuntu:/root# kubectl get node
NAME     STATUS   ROLES           AGE    VERSION
ubuntu   Ready    control-plane   153m   v1.25.0

Next, deploy the NGINX Kubernetes Ingress Controller. The ServiceAccount, RBAC, and other prerequisites are already in place, so only the KIC Deployment needs to be created. Enter the /root/kic-oss-lab/0-deployment directory and create nginx-ingress-hostnetwork.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
     #annotations:
       #prometheus.io/scrape: "true"
       #prometheus.io/port: "9113"
       #prometheus.io/scheme: http
    spec:
      serviceAccountName: nginx-ingress
      hostNetwork: true
      containers:
      - image: nginx/nginx-ingress:2.3.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: tcp-8001
          containerPort: 8001
        - name: tcp-8002
          containerPort: 8002
        - name: readiness-port
          containerPort: 8081
        - name: prometheus
          containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
         #limits:
         #  cpu: "1"
         #  memory: "1Gi"
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          runAsNonRoot: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
         #- -enable-cert-manager
         #- -enable-external-dns
         #- -v=3 # Enables extensive logging. Useful for troubleshooting.
         #- -report-ingress-status
         #- -external-service=nginx-ingress
         #- -enable-prometheus-metrics
          - -global-configuration=$(POD_NAMESPACE)/nginx-configuration

Apply the file above and confirm that the pod is running.

root@ubuntu:/root/kic-oss-lab/0-deployment# kubectl apply -f nginx-ingress-hostnetwork.yaml
deployment.apps/nginx-ingress created
root@ubuntu:/root# kubectl get pod -n nginx-ingress
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-59dfd5855c-nknnr   1/1     Running   0          4m9s

Use curl against ports 80 and 443 on the local machine. If you get a response from NGINX (the default 404 page), the KIC is deployed successfully and you can continue with the rest of the lab.

root@ubuntu:/root# curl 127.0.0.1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.23.0</center>
</body>
</html>
root@ubuntu:/root# curl -k https://127.0.0.1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.23.0</center>
</body>
</html>

2. Ingress Resource Configuration

Enter the /root/kic-oss-lab/1-ingress directory and apply the following files in turn to create the applications and the TLS certificate secret.

root@k8s-calico-master:~/kic-oss-lab/1-ingress # kubectl apply -f cafe.yaml -f cafe-secret.yaml
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea created
service/tea-svc created
secret/cafe-secret created

You can verify the backend applications with curl.

root@k8s-calico-master:~/kic-oss-lab/1-ingress # kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
coffee-svc   ClusterIP   10.98.91.20    <none>        80/TCP    78s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   104d
tea-svc      ClusterIP   10.96.160.45   <none>        80/TCP    78s
root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl 10.98.91.20
Server address: 10.244.195.105:80
Server name: coffee-87cf76b96-7c2wx
Date: 22/Aug/2022:06:58:55 +0000
URI: /
Request ID: 6cb10905d97c76ed137622a00f45c07b
root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl 10.96.160.45
Server address: 10.244.195.107:80
Server name: tea-7b475f7bcb-sgtmm
Date: 22/Aug/2022:06:59:28 +0000
URI: /
Request ID: 05bae1c95d02a9d000ef9ce58aad255d

Create cafe-ingress.yaml.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Apply the file.

root@k8s-calico-master:~/kic-oss-lab/1-ingress # kubectl apply -f cafe-ingress.yaml
ingress.networking.k8s.io/cafe-ingress created
root@k8s-calico-master:~/kic-oss-lab/1-ingress # kubectl get ingress
NAME           CLASS   HOSTS              ADDRESS   PORTS     AGE
cafe-ingress   nginx   cafe.example.com             80, 443   11s

Access the application with curl. The hosts file has already been edited, so the domain name resolves directly. HTTP requests are automatically redirected to HTTPS; you can also access HTTPS directly.
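
For reference, the hosts entry looks roughly like the following (illustrative; since the KIC runs with hostNetwork on this single node, the loopback address is enough):

# /etc/hosts - illustrative entry for the lab domain
127.0.0.1   cafe.example.com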

root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl -i http://cafe.example.com/tea
HTTP/1.1 301 Moved Permanently
Server: nginx/1.23.0
Date: Mon, 22 Aug 2022 07:11:45 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://cafe.example.com:443/tea

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.23.0</center>
</body>
</html>
root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl -L -k http://cafe.example.com/tea
Server address: 10.244.195.107:80
Server name: tea-7b475f7bcb-sgtmm
Date: 22/Aug/2022:07:12:01 +0000
URI: /tea
Request ID: bd6f0a77d948e668d5aca32bc4426584
root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl -k https://cafe.example.com/coffee
Server address: 10.244.195.105:80
Server name: coffee-87cf76b96-7c2wx
Date: 22/Aug/2022:07:12:18 +0000
URI: /coffee
Request ID: e3f2e955672a37fca5268cbf5f526960

Access the same URL a few times; the Server name field shows that requests are load balanced across the two pods.
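
A quick way to observe the distribution is to repeat the request in a loop and print the responding pod; a small sketch:

# Illustrative: send several requests and show which pod served each one
for i in $(seq 1 6); do
  curl -sk https://cafe.example.com/coffee | grep "Server name"
done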

Now modify the Ingress above by adding an annotation, then apply it again with kubectl apply.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.org/rewrites: "serviceName=tea-svc rewrite=/;serviceName=coffee-svc rewrite=/beans"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Access it with curl again; the URI is now rewritten.

root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl -k https://cafe.example.com/tea
Server address: 10.244.195.106:80
Server name: tea-7b475f7bcb-sql44
Date: 22/Aug/2022:07:19:06 +0000
URI: /
Request ID: 091059a71d4c4ddfd36dbda30747bba5
root@k8s-calico-master:~/kic-oss-lab/1-ingress # curl -k https://cafe.example.com/coffee
Server address: 10.244.195.104:80
Server name: coffee-87cf76b96-kb9hz
Date: 22/Aug/2022:07:19:12 +0000
URI: /beans
Request ID: 0c1755a924c89d213156727a3ab641df

Delete all the resources to restore the lab environment.
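
A sketch of the cleanup, assuming the file names used in this section:

# Illustrative cleanup for this section
kubectl delete -f cafe-ingress.yaml -f cafe-secret.yaml -f cafe.yaml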

3. VirtualServer and VirtualServerRoute Resource Configuration

Enter the /root/kic-oss-lab/2-vs directory and apply all the files in it to create the namespaces and the applications in each of them.

root@k8s-calico-master:~/kic-oss-lab/2-vs # kubectl apply -f namespaces.yaml -f coffee.yaml -f tea.yaml -f cafe-secret.yaml
namespace/cafe created
namespace/tea created
namespace/coffee created
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea-v1 created
service/tea-v1-svc created
deployment.apps/tea-v2 created
service/tea-v2-svc created
secret/cafe-secret created

Create and apply the VirtualServer and two VirtualServerRoute resources in turn, starting with cafe-virtual-server.yaml.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
  namespace: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  routes:
  - path: /tea
    route: tea/tea
  - path: /coffee
    route: coffee/coffee

Next, coffee-virtual-server-route.yaml:

apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
  namespace: coffee
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  subroutes:
  - path: /coffee
    action:
      proxy:
        upstream: coffee
        requestHeaders:
          pass: true
          set:
          - name: My-Header
            value: F5-Best
        responseHeaders:
          add:
          - name: My-Header
            value: ${http_user_agent}
          - name: IC-Nginx-Version
            value: ${nginx_version}
            always: true
        rewritePath: /coffee/rewrite

Finally, tea-virtual-server-route.yaml:

apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: tea
  namespace: tea
spec:
  host: cafe.example.com
  upstreams:
  - name: tea-v1
    service: tea-v1-svc
    port: 80
  - name: tea-v2
    service: tea-v2-svc
    port: 80
  subroutes:
  - path: /tea
    matches:
    - conditions:
      - cookie: version
        value: v2
      action:
        pass: tea-v2
    action:
      pass: tea-v1

Let's first look at the application services: the coffee and tea services live in separate namespaces, and tea is further split into v1 and v2 versions.

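You can confirm this with commands along these lines (illustrative):

# Illustrative: list the workloads and services in each namespace
kubectl get deploy,svc -n coffee
kubectl get deploy,svc -n tea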

Start by accessing the coffee application with curl. Besides the normal response, you can see the headers inserted by the KIC, and the URI has been rewritten.

root@k8s-calico-master:~/kic-oss-lab/2-vs # curl -k -i https://cafe.example.com/coffee
HTTP/1.1 200 OK
Server: nginx/1.23.0
Date: Mon, 22 Aug 2022 07:38:52 GMT
Content-Type: text/plain
Content-Length: 172
Connection: keep-alive
Expires: Mon, 22 Aug 2022 07:38:51 GMT
Cache-Control: no-cache
My-Header: curl/7.58.0
IC-Nginx-Version: 1.23.0

Server address: 10.244.195.109:8080
Server name: coffee-7b9b4bbd99-zzc7v
Date: 22/Aug/2022:07:38:52 +0000
URI: /coffee/rewrite
Request ID: 330ac344e70abff73852bc00d6d563a2

Next, access the tea application. With a cookie of version=v2 the request lands on the v2 version of tea; without the cookie, or with any other cookie value, it lands on v1.

root@k8s-calico-master:~/kic-oss-lab/2-vs # curl -k https://cafe.example.com/tea
Server address: 10.244.195.108:8080
Server name: tea-v1-99bf9564c-lkc2g
Date: 22/Aug/2022:07:41:40 +0000
URI: /tea
Request ID: 4ce5951672e31777edd0ef325fed08c9
root@k8s-calico-master:~/kic-oss-lab/2-vs # curl -k --cookie "version=v2" https://cafe.example.com/tea
Server address: 10.244.195.110:8080
Server name: tea-v2-6c9df94f67-2d7f7
Date: 22/Aug/2022:07:42:02 +0000
URI: /tea
Request ID: b376f7bc50a0536e6f73df37ae7bd767

Delete all the resources applied in this section to restore the lab environment.

4. TransportServer Resource Configuration

Enter the /root/kic-oss-lab/3-ts directory and first apply cafe.yaml to create the backend applications. Then create globalconfiguration-listener.yaml to define new listeners for the KIC.

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: tcp-8001
    port: 8001
    protocol: TCP
  - name: tcp-8002
    port: 8002
    protocol: TCP

After applying that file, create and apply the two TransportServer resources, starting with ts-coffee.yaml.

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: coffee
spec:
  listener:
    name: tcp-8001
    protocol: TCP
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  action:
    pass: coffee

Then ts-tea.yaml:

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: tea
spec:
  listener:
    name: tcp-8002
    protocol: TCP
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  action:
    pass: tea
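
Both TransportServers are applied in the usual way; a sketch using the file names above:

# Illustrative: apply the two TransportServer resources created above
kubectl apply -f ts-coffee.yaml -f ts-tea.yaml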

Once all of these resources are applied, the tea and coffee applications can be reached directly via IP:port.

root@k8s-calico-master:~/kic-oss-lab/3-ts # curl http://127.0.0.1:8001
Server address: 10.244.195.112:80
Server name: coffee-87cf76b96-9dnl8
Date: 22/Aug/2022:07:53:38 +0000
URI: /
Request ID: 121cbef18ecabaeb2e2c15943d8745b2
root@k8s-calico-master:~/kic-oss-lab/3-ts # curl http://127.0.0.1:8002
Server address: 10.244.195.113:80
Server name: tea-7b475f7bcb-vnj5k
Date: 22/Aug/2022:07:53:43 +0000
URI: /
Request ID: abb07e6f0ffde9e053ba58fd06db44b2

Delete the related resources, but be careful not to delete the GlobalConfiguration resource itself! To remove the listeners, apply an empty GlobalConfiguration resource instead.
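
One way to express such an "empty" resource is a GlobalConfiguration with an empty listener list (a sketch; omitting the listeners field entirely should have the same effect):

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners: []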

5. Policy Resources

Enter the /root/kic-oss-lab/4-policy directory, apply webapp.yaml to deploy the backend application, and then create and apply rate-limit.yaml.

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 100r/s
    burst: 50
    noDelay: true
    key: ${binary_remote_addr}
    zoneSize: 10M
    rejectCode: 444

Next, create and apply virtual-server.yaml, which references the Policy from the VirtualServer; for now, leave the policies section commented out.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  #policies:
  #- name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp

With the Policy still commented out, run a quick load test with wrk.

root@k8s-calico-master:~/kic-oss-lab/4-policy # wrk -t2 -c10 -d10s http://cafe.example.com
Running 10s test @ http://cafe.example.com
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.99ms    3.59ms  45.17ms   84.71%
    Req/Sec     1.52k     0.94k    2.91k    40.50%
  30332 requests in 10.01s, 10.76MB read
Requests/sec:   3031.01
Transfer/sec:      1.08MB

The request rate comes out around 3000 rps (the exact number varies by environment, but it will be far above 100). Now remove the comments in the VirtualServer so the Policy takes effect.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp

Run the same wrk command again.

root@k8s-calico-master:~/kic-oss-lab/4-policy # wrk -t2 -c10 -d10s http://cafe.example.com
Running 10s test @ http://cafe.example.com
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.49ms   23.31ms 212.68ms   83.37%
    Req/Sec    52.36     31.83   343.00     86.43%
  1045 requests in 10.00s, 379.63KB read
  Socket errors: connect 0, read 85782, write 0, timeout 0
Requests/sec:    104.48
Transfer/sec:     37.96KB

This time the request rate is held at roughly 100 rps. The KIC logs show requests above the threshold being rejected with status 444.

127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 128#128: *117569 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117568 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117570 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117571 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
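
The excerpt above comes from the controller pod's log, which can be followed with something like:

# Illustrative: stream the NGINX Ingress Controller logs
kubectl logs -n nginx-ingress deployment/nginx-ingress -f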

Delete all the resources to restore the lab environment.


This completes the "Dynamic Release of Containerized Applications" portion of the lab!
