Simplifying K8s Application Management with Helm



I. Background

After migrating the core microservices of the internal development environment, two test environments, and the production environment to pods using purely hand-maintained yaml files, we found the main pain points to be:

1. The volume of workload-related yaml files is huge and error-prone to maintain by hand (the internal network alone currently has 77 workloads).

2. Developers request workload configuration changes frequently, e.g. adjusting JVM parameters, adding initContainers, or modifying liveness/readiness probes and affinity/anti-affinity rules, and these changes are largely identical across workloads.

3. Every namespace carries decoupled configuration such as environment variables, configmaps, RBAC, and PV/PVC; if these are not created in advance, workloads created afterwards cannot run properly.


As the second-phase microservice refactoring of each platform module proceeds, every namespace is expected to gain another 30-40 workloads, so the number of yaml files to maintain will grow sharply and hand-maintenance is no longer realistic. Using Helm to manage our k8s applications was therefore put on the agenda.

For Helm's configuration file syntax and server-side setup, refer to the official manual: https://helm.sh/docs/
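
This article was written against Helm 2 (hence the --name flags in the install commands later on). As a reminder, and assuming a tiller ServiceAccount with the required cluster role already exists, the client and the in-cluster Tiller are set up roughly like this:

# helm init --service-account tiller   # install Tiller into kube-system
# helm version                         # client and server versions should match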

II. Requirements Analysis

1. Common configuration

ConfigMap

Each namespace has at least two configmaps. center-config stores the IPs, usernames and passwords, connection-pool settings, and so on for base shared services such as MySQL, MongoDB, and Redis in each environment; with the configuration centralized and decoupled this way, code is compiled into an image once and runs in every environment.
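
Once a workload imports this configmap via envFrom (see the deployment template in part III), the variables land directly in the container environment. A quick spot check, with a placeholder pod name:

# kubectl -n test3 exec <tomcat-pod> -- env | grep MYSQL_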

The hb-lan-server-xml file is actually Tomcat's server.xml. Because earlier code used Redis for login sessions, server.xml had to be modified, so the file is decoupled into its own configmap; workloads created later mount it to replace the server.xml baked into the image layer.
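
The charts below do not template hb-lan-server-xml itself, so it is assumed to already exist in each namespace. One way to create it from the real file, as a sketch:

# kubectl -n test3 create configmap hb-lan-server-xml --from-file=server.xml=./server.xml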


Secret

Every namespace holds a harborsecret token which, as the name implies, carries the credentials for pulling images from the Harbor registry; without it, newly created workloads cannot pull their images.
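
The base64 string in the secret template below is just an encoded .dockerconfigjson payload. It can be generated with kubectl (the credentials here are placeholders), or by encoding an existing docker login:

# kubectl -n test3 create secret docker-registry harborsecret \
  --docker-server=harbor.59iedu.com \
  --docker-username=admin \
  --docker-password=xxxx
# base64 -w0 ~/.docker/config.json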


PV/PVC

Every namespace has a companion piece of shared storage for centrally holding user-uploaded attachments; on the internal network we back it with NFS.
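
Before defining the PV, it is worth confirming on each node that the NFS export is actually reachable (this assumes the NFS client utilities are installed on the nodes):

# showmount -e 192.168.1.20
# mkdir -p /tmp/nfstest && mount -t nfs 192.168.1.20:/mnt/mfs /tmp/nfstest && umount /tmp/nfstest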


RBAC

Applications fetch some configuration-type information by curling the k8s master. Without the corresponding RBAC grant the request is rejected with a 401, so the default ServiceAccount in every namespace needs an RBAC authorization.
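
For reference, the in-pod call looks roughly like this; without the ClusterRoleBinding defined below, the API server rejects it:

# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# curl -sk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/test3/pods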


2. Workloads

All workloads are currently stateless. They fall into two groups: one needs to expose a domain name and port externally; the other is invoked internally through Dubbo, with registration and discovery via ZooKeeper.


III. Configuring and Testing Helm

1. Common configuration

# helm create basic
# cd basic
# helm create charts/namespace
# rm -rf charts/namespace/templates/*
# cat charts/namespace/values.yaml 
# Default values for namespace.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

namespace: default
# cat charts/namespace/templates/namespace.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
# cat charts/namespace/templates/env-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: center-config
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Release.Name }}
data:
  ES_CLUSTER_NAME: hbjy6_dev
  ES_CLUSTER_NODES: 192.168.1.19:9500
  ETCD_ENDPOINTS: https://192.168.1.11:2379,https://192.168.1.17:2379,https://192.168.1.23:2379
  ETCD_PASSWORD: ""
  ETCD_SSL_KEY: ""
  ETCD_SSL_KEY_FILE: /mnt/mfs/private/etcd/client-key.pem
  ETCD_SSL_KEYCERT: ""
  ETCD_SSL_KEYCERT_FILE: /mnt/mfs/private/etcd/client.pem
  ETCD_SSL_TRUSTCERT: ""
  ETCD_SSL_TRUSTCERT_FILE: /mnt/mfs/private/etcd/ca.pem
  ETCD_USER: ""
  MONGODB_PASSWORD: xxxx
  MONGODB_REPLICA_SET: 192.168.1.21:37017,192.168.1.15:57017,192.168.1.16:57017
  MONGODB_USER: ROOT
  MYSQL_MASTER_PASSWORD: "xxxxx"
  MYSQL_MASTER_URL: 192.168.1.20:3306
  MYSQL_MASTER_USER: root
  MYSQL_PROXYSQL_PASSWORD: "xxxxx"
  MYSQL_PROXYSQL_URL: 192.168.1.20:1234
  MYSQL_PROXYSQL_USER: root
  REDIS_MASTER_NAME: sigma-server1
  REDIS_PASSWORD: "xxxxx"
  REDIS_SENTINEL1_HOST: 192.168.1.20
  REDIS_SENTINEL1_PORT: "26379"
  REDIS_SENTINEL2_HOST: 192.168.1.21
  REDIS_SENTINEL2_PORT: "26379"
  REDIS_SENTINEL3_HOST: 192.168.1.22
  REDIS_SENTINEL3_PORT: "26379"
  ROCKETMQ_NAMESERVER: 192.168.1.20:9876
  ZK_BUSINESS_ADDRESS: 192.168.1.20:2181,192.168.1.21:2181,192.168.1.22:2181
  ZK_REGISTRY_ADDRESS: 192.168.1.20:2181,192.168.1.21:2181,192.168.1.22:2181


# cat charts/namespace/templates/secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: harborsecret
  namespace: {{ .Values.namespace }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoYXJib3IuNTlpZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImFkbWluIiwicGFzc3dvcmQiOiJIYXJib3IxMjM0NSIsImF1dGgiOiJZV1J0YVc0NlNHRnlZbTl5TVRJek5EVT0ifX19
# cat charts/namespace/templates/mfsdata-pv-pvc.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mfsdata-{{ .Values.namespace }}
spec:
  capacity:
    storage: 150Gi 
  accessModes:
  - ReadWriteMany 
  nfs: 
    path: /mnt/mfs
    server: 192.168.1.20
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mfsdata-{{ .Values.namespace }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 150Gi
# cat charts/namespace/templates/clusterrole.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Release.Name }}-{{ .Values.namespace }}-role
  labels:
    app: {{ .Release.Name }}
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["batch"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
# cat charts/namespace/templates/clusterrolebinding.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-{{ .Values.namespace }}-binding
  labels:
    app: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Release.Name }}-{{ .Values.namespace }}-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: {{ .Values.namespace }}
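
Once this chart is installed, the binding can be spot-checked from an admin workstation without touching any real resources:

# kubectl auth can-i list pods --as=system:serviceaccount:test3:default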

2. Workloads

# cd basic
# helm create charts/tomcat
# rm -rf charts/tomcat/templates/*
# cat charts/tomcat/values.yaml   
# Default values for tomcat.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1
version: v1
mfsdata: mfsdata
env: >-
  -server -Xms1024M -Xmx1024M -XX:MaxMetaspaceSize=320m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/tomcat/jvmdump/
  -Duser.timezone=Asia/Shanghai
  -Drocketmq.client.logRoot=/home/tomcat/logs/rocketmqlog

# full image reference; always overridden at install time via --set image=...
image: harbor.59iedu.com/dev/tomcat_base:latest
pullPolicy: Always

service:
  type: ClusterIP
  port: 8080
  dubboport: 20880

ingress:
  enabled: false
  annotations: 
    nginx.ingress.kubernetes.io/rewrite-target: /
  path: /
  hosts:
    - www.test1.com
    - www.test2.com

resources:
  # Default requests/limits; override per release at install time
  # with --set resources.limits.cpu=... etc.
  limits:
    cpu: 200m
    memory: 0.2Gi
  requests:
    cpu: 100m
    memory: 0.1Gi

nodeSelector: {}

tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300

# any non-empty map here enables the pod anti-affinity block in deployment.yaml
affinity:
  podAntiAffinity: preferred
# cat charts/tomcat/templates/deployment.yaml 
{{- $releaseName := .Release.Name -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Release.Name }}
    version: {{ .Values.version }}
spec:
  replicas: {{ .Values.replicaCount }}
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  selector:
    matchLabels:
      app: {{ .Release.Name }}
      version: {{ .Values.version }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        version: {{ .Values.version }}
    spec:
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      volumes:
        - name: hb-lan-server-xml
          configMap:
           name: hb-lan-server-xml
           items:
             - key: server.xml
               path: server.xml

        - name: vol-localtime
          hostPath:
            path: /etc/localtime
            type: ''
        - name: mfsdata
          persistentVolumeClaim:
            claimName: {{ .Values.mfsdata }}
        - name: pp-agent
          emptyDir: {}

      imagePullSecrets:
       - name: harborsecret

      initContainers:
        - name: init-pinpoint
          image: 'harbor.59iedu.com/fjhb/pp_agent:latest'
          command:
            - sh
            - '-c'
            - cp -rp /var/lib/pp_agent/* /var/init/pinpoint
          resources: {}
          volumeMounts:
            - name: pp-agent
              mountPath: /var/init/pinpoint
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always

      containers:
        - name: {{ .Release.Name }}
          image: {{ .Values.image }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          securityContext: {}
          lifecycle:
           preStop:
            exec:
             command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"]

          envFrom:
          - configMapRef:
               name: center-config
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CATALINA_OPTS
              value: >-
                -javaagent:/var/init/pinpoint/pinpoint-bootstrap.jar
                -Dpinpoint.agentId=${POD_IP}
                -Dpinpoint.applicationName=test1-{{ .Release.Name }}
            - name: JAVA_OPTS
              value: >- 
                {{ .Values.env }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: {{ .Values.service.port }}
            initialDelaySeconds: 60
            timeoutSeconds: 2
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: {{ .Values.service.dubboport }}
            initialDelaySeconds: 120
            timeoutSeconds: 3
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          volumeMounts:
            - name: hb-lan-server-xml 
              mountPath: /home/tomcat/conf/server.xml
              subPath: server.xml
            - name: vol-localtime
              readOnly: true
              mountPath: /etc/localtime
            - name: mfsdata
              mountPath: /mnt/mfs
            - name: pp-agent
              mountPath: /var/init/pinpoint
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - {{ $releaseName }}
              topologyKey: "kubernetes.io/hostname"
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
# cat charts/tomcat/templates/ingress.yaml 
{{- if .Values.ingress.enabled -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $releaseName := .Release.Name -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Release.Name }}
    version: {{ .Values.version }}
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $releaseName }}
              servicePort: {{ $servicePort }}
  {{- end }}
{{- end }}
# cat charts/tomcat/templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name  }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Release.Name  }}
    version: {{ .Values.version }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ .Release.Name  }}
    version: {{ .Values.version }}
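
Before the dry run in the next step, helm lint gives both charts a quick template and syntax check:

# helm lint /root/basic/charts/namespace
# helm lint /root/basic/charts/tomcat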

3. Test runs

# helm install --debug --dry-run /root/basic/charts/namespace/  \
--set namespace=test3

# helm install --debug --dry-run /root/basic/charts/tomcat/   \
--name tomcat-test \
--set namespace=test3  \
--set mfsdata=mfsdata-test3 \
--set replicaCount=2 \
--set image=harbor.59iedu.com/dev/tomcat_base:v1.1-20181127 \
--set ingress.enabled=true \
--set ingress.hosts={tomcat.59iedu.com} \
--set resources.limits.cpu=2000m \
--set resources.limits.memory=2Gi \
--set resources.requests.cpu=500m \
--set resources.requests.memory=1Gi \
--set service.dubboport=8080

4. Creating the release

# helm install  /root/basic/charts/namespace/ --set namespace=test3


# helm install  /root/basic/charts/tomcat/   \
--name tomcat-test \
--set namespace=test3  \
--set mfsdata=mfsdata-test3 \
--set replicaCount=2 \
--set image=harbor.59iedu.com/dev/tomcat_base:v1.1-20181127 \
--set ingress.enabled=true \
--set ingress.hosts={tomcat.59iedu.com} \
--set resources.limits.cpu=2000m \
--set resources.limits.memory=2Gi \
--set resources.requests.cpu=500m \
--set resources.requests.memory=1Gi \
--set service.dubboport=8080
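
After the install, the release and the objects it created can be verified with:

# helm ls tomcat-test
# kubectl -n test3 get deploy,svc,ingress,pods -o wide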


IV. Upgrade and Rollback

# helm upgrade --install tomcat-test \
--values /root/basic/charts/tomcat/values.yaml   \
--set namespace=test3  \
--set mfsdata=mfsdata-test3 \
--set replicaCount=1 \
--set image=harbor.59iedu.com/dev/tomcat_base:v1.1-20181127 \
--set ingress.enabled=true \
--set ingress.hosts={tomcat1.59iedu.com} \
--set resources.limits.cpu=200m \
--set resources.limits.memory=1Gi \
--set resources.requests.cpu=100m \
--set resources.requests.memory=0.5Gi \
--set service.dubboport=8080 \
/root/basic/charts/tomcat

If an upgrade goes wrong, you can roll back:

# helm rollback tomcat-test 1  
# helm history tomcat-test 
# helm get --revision 1 tomcat-test
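
When a test release is no longer needed, Helm 2 removes it with either of the following; --purge also deletes the release history and frees the name for reuse:

# helm delete tomcat-test
# helm delete --purge tomcat-test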


