Deploying RocketMQ 5 on K8s: A Complete Walkthrough
Background
We needed to deploy RocketMQ 5 in a development environment to verify the new proxy-related features of version 5. The environment has K8s, but no Helm and no internet access.
There is not much material about this online, so this post records the procedure.
Procedure
1. Pull the RocketMQ 5 chart from a Helm repository
This uses a community-maintained Helm chart repository:
```shell
## Add the Helm repository
helm repo add rocketmq-repo https://helm-charts.itboon.top/rocketmq
helm repo update rocketmq-repo

## Search the repository
helm search repo rocketmq

## Pull both charts to the local machine
helm pull rocketmq-repo/rocketmq
helm pull rocketmq-repo/rocketmq-cluster

## Unpack
tar -zxf rocketmq.tgz
```
2. Single-node cluster startup test
Enter the chart directory and edit values.yaml:
```yaml
clusterName: "rocketmq-helm"

image:
  repository: "apache/rocketmq"
  pullPolicy: IfNotPresent
  tag: "5.3.0"

podSecurityContext:
  fsGroup: 3000
  runAsUser: 3000

broker:
  size:
    master: 1
    replica: 0
  # podSecurityContext: {}
  # containerSecurityContext: {}
  master:
    brokerRole: ASYNC_MASTER
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 200m
        memory: 256Mi
  replica:
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 4
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
  hostNetwork: false
  persistence:
    enabled: true
    size: 100Mi
    #storageClass: "local-storage"
  aclConfigMapEnabled: false
  aclConfig: |
    globalWhiteRemoteAddresses:
    - '*'
    - 10.*.*.*
    - 192.168.*.*
  config:
    ## brokerClusterName, brokerName, brokerRole and brokerId are generated by the built-in startup script
    deleteWhen: "04"
    fileReservedTime: "48"
    flushDiskType: "ASYNC_FLUSH"
    waitTimeMillsInSendQueue: "1000"
    # aclEnable: true
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## broker.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6

nameserver:
  replicaCount: 1
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 256Mi
      ephemeral-storage: 256Mi
    requests:
      cpu: 100m
      memory: 256Mi
      ephemeral-storage: 256Mi
  persistence:
    enabled: false
    size: 256Mi
    #storageClass: "local-storage"
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## nameserver.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## nameserver.service
  service:
    annotations: {}
    type: ClusterIP

proxy:
  enabled: true
  replicaCount: 1
  jvm:
    maxHeapSize: 600M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## proxy.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## proxy.service
  service:
    annotations: {}
    type: ClusterIP

dashboard:
  enabled: true
  replicaCount: 1
  image:
    repository: "apacherocketmq/rocketmq-dashboard"
    pullPolicy: IfNotPresent
    tag: "1.0.0"
  auth:
    enabled: true
    users:
      - name: admin
        password: admin
        isAdmin: true
      - name: user01
        password: userPass
  jvm:
    maxHeapSize: 256M
  resources:
    limits:
      cpu: 1
      memory: 512Mi
    requests:
      cpu: 20m
      memory: 512Mi
  ## dashboard.readinessProbe
  readinessProbe:
    failureThreshold: 6
    httpGet:
      path: /
      port: http
  livenessProbe: {}
  service:
    annotations: {}
    type: ClusterIP
    # nodePort: 31007
  ingress:
    enabled: false
    className: ""
    annotations: {}
      # nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,124.160.30.50
    hosts:
      - host: rocketmq-dashboard.example.com
    tls: []
    # - secretName: example-tls
    #   hosts:
    #     - rocketmq-dashboard.example.com

## controller mode is an experimental feature
controllerModeEnabled: false

controller:
  enabled: false
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  persistence:
    enabled: true
    size: 256Mi
    accessModes:
      - ReadWriteOnce
  ## controller.service
  service:
    annotations: {}
  ## controller.config
  config:
    controllerDLegerGroup: group1
    enableElectUncleanMaster: false
    notifyBrokerRoleChanged: true
  ## controller.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
```
Launch with Helm:
```shell
helm upgrade --install rocketmq \
  --namespace rocketmq-demo \
  --create-namespace \
  --set broker.persistence.enabled="false" \
  ./rocketmq
```
3. StorageClass / PV configuration
Storage uses local hostPath mounts:
StorageClass:
```yaml
# sc_local.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    openebs.io/cas-type: local
    storageclass.kubernetes.io/is-default-class: "false"
    cas.openebs.io/config: |
      #hostpath type will create a PV by
      # creating a sub-directory under the
      # BASEPATH provided below.
      - name: StorageType
        value: "hostpath"
      #Specify the location (directory) where
      # PV(volume) data will be saved.
      # A sub-directory with pv-name will be
      # created. When the volume is deleted,
      # the PV sub-directory will be deleted.
      #Default value is /var/openebs/local
      - name: BasePath
        value: "/tmp/storage"
provisioner: openebs.io/local
volumeBindingMode: Immediate
reclaimPolicy: Retain
```

```shell
kubectl apply -f sc_local.yaml
```
PVs (broker only):
Note that PersistentVolumes are cluster-scoped, so no namespace field is needed here:

```yaml
# local_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: broker-storage-rocketmq-broker-master-0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /tmp/storage
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: broker-storage-rocketmq-broker-replica-id1-0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /tmp/storageSlave
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-storage
  volumeMode: Filesystem
```

```shell
kubectl apply -f local_pv.yaml
## to clear all PVs when re-testing:
kubectl delete pv --all
```
4. Cluster startup test
Edit values.yaml for the cluster chart; the main change is lower resource settings:
```yaml
clusterName: "rocketmq-helm"
nameOverride: rocketmq

image:
  repository: "apache/rocketmq"
  pullPolicy: IfNotPresent
  tag: "5.3.0"

podSecurityContext:
  fsGroup: 3000
  runAsUser: 3000

broker:
  size:
    master: 1
    replica: 1
  # podSecurityContext: {}
  # containerSecurityContext: {}
  master:
    brokerRole: ASYNC_MASTER
    jvm:
      maxHeapSize: 512M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
  replica:
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 256Mi
      requests:
        cpu: 50m
        memory: 128Mi
  hostNetwork: false
  persistence:
    enabled: true
    size: 100Mi
    #storageClass: "local-storage"
  aclConfigMapEnabled: false
  aclConfig: |
    globalWhiteRemoteAddresses:
    - '*'
    - 10.*.*.*
    - 192.168.*.*
  config:
    ## brokerClusterName, brokerName, brokerRole and brokerId are generated by the built-in startup script
    deleteWhen: "04"
    fileReservedTime: "48"
    flushDiskType: "ASYNC_FLUSH"
    waitTimeMillsInSendQueue: "1000"
    # aclEnable: true
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## broker.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6

nameserver:
  replicaCount: 1
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 1
      memory: 256Mi
      ephemeral-storage: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
      ephemeral-storage: 128Mi
  persistence:
    enabled: false
    size: 128Mi
    #storageClass: "local-storage"
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## nameserver.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## nameserver.service
  service:
    annotations: {}
    type: ClusterIP

proxy:
  enabled: true
  replicaCount: 2
  jvm:
    maxHeapSize: 512M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## proxy.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## proxy.service
  service:
    annotations: {}
    type: ClusterIP

dashboard:
  enabled: false
  replicaCount: 1
  image:
    repository: "apacherocketmq/rocketmq-dashboard"
    pullPolicy: IfNotPresent
    tag: "1.0.0"
  auth:
    enabled: true
    users:
      - name: admin
        password: admin
        isAdmin: true
      - name: user01
        password: userPass
  jvm:
    maxHeapSize: 256M
  resources:
    limits:
      cpu: 1
      memory: 256Mi
    requests:
      cpu: 20m
      memory: 128Mi
  ## dashboard.readinessProbe
  readinessProbe:
    failureThreshold: 6
    httpGet:
      path: /
      port: http
  livenessProbe: {}
  service:
    annotations: {}
    type: ClusterIP
    # nodePort: 31007
  ingress:
    enabled: false
    className: ""
    annotations: {}
      # nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,124.160.30.50
    hosts:
      - host: rocketmq-dashboard.example.com
    tls: []
    # - secretName: example-tls
    #   hosts:
    #     - rocketmq-dashboard.example.com

## controller mode is an experimental feature
controllerModeEnabled: false

controller:
  enabled: false
  replicaCount: 3
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
  persistence:
    enabled: true
    size: 128Mi
    accessModes:
      - ReadWriteOnce
  ## controller.service
  service:
    annotations: {}
  ## controller.config
  config:
    controllerDLegerGroup: group1
    enableElectUncleanMaster: false
    notifyBrokerRoleChanged: true
  ## controller.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
```
5. Offline installation
Render the chart to plain YAML with helm template:
```shell
helm template rocketmq ./rocketmq-cluster --output-dir ./rocketmq-cluster-yaml
```
Note: once the chart is rendered to plain YAML, the namespace that Helm would have set is gone.
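One way to put the namespace back without Helm is a small text pass over the rendered files. This is a hedged sketch (the helper name and file layout are assumptions, not part of the chart): it inserts a `namespace:` line under the first top-level `metadata:` of a manifest, and leaves already-namespaced files alone.

```shell
#!/bin/sh
# add_namespace FILE NS -- insert "namespace: NS" under the first
# top-level "metadata:" line if no namespace line is present (sketch).
add_namespace() {
  file="$1"; ns="$2"
  if grep -q '^  namespace:' "$file"; then
    return 0  # already namespaced
  fi
  awk -v ns="$ns" '
    /^metadata:/ && !done { print; print "  namespace: " ns; done=1; next }
    { print }
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Example (hypothetical path, matching helm template's --output-dir):
# for f in $(find rocketmq-cluster-yaml -name '*.yaml'); do
#   add_namespace "$f" rocketmq
# done
```

This only covers the common single-document case; files containing several `---`-separated documents would need the same treatment per document.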
Apply the YAML files to verify:
```shell
kubectl apply -f rocketmq-cluster-yaml/ --recursive
## tear down
kubectl delete -f rocketmq-cluster-yaml/ --recursive
```
Package and transfer the YAML:
```shell
## Install the transfer tool
yum install lrzsz
## Pack the YAML directory
tar czvf folder.tar.gz itboon
sz folder.tar.gz
```
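Before shipping the archive, the pack step can be verified locally (a sketch; `sz` needs an interactive terminal, so only the tar step is exercised, and `pack_dir` is a hypothetical helper name):

```shell
#!/bin/sh
# pack_dir DIR OUT -- pack a directory into a gzipped tarball,
# so its contents can be listed with "tar tzf" before transfer.
pack_dir() {
  dir="$1"; out="$2"
  tar czf "$out" "$dir"
}
```

Listing the archive with `tar tzf folder.tar.gz` on the sending side is a cheap sanity check that every rendered manifest made it in.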
Appendix
The final rendered deployment YAML:
- nameserver
```yaml
---
# Source: rocketmq-cluster/templates/nameserver/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "rocketmq-nameserver"
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: nameserver
  serviceName: "rocketmq-nameserver-headless"
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: nameserver
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: nameserver
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: nameserver
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: ROCKETMQ_PROCESS_ROLE
              value: nameserver
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms512M -Xmx512M
          ports:
            - containerPort: 9876
              name: main
              protocol: TCP
          resources:
            limits:
              cpu: 1
              ephemeral-storage: 512Mi
              memory: 512Mi
            requests:
              cpu: 100m
              ephemeral-storage: 256Mi
              memory: 256Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown namesrv"]
          volumeMounts:
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
            - mountPath: /etc/rocketmq/base-cm
              name: base-cm
            - mountPath: /home/rocketmq/logs
              name: nameserver-storage
              subPath: logs
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 15
      volumes:
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
        - configMap:
            name: rocketmq-server-config
          name: base-cm
        - name: nameserver-storage
          emptyDir: {}
```
```yaml
---
# Source: rocketmq-cluster/templates/nameserver/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: rocketmq-nameserver
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: nameserver
spec:
  ports:
    - port: 9876
      protocol: TCP
      targetPort: 9876
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: nameserver
  type: "ClusterIP"
```
```yaml
---
# Source: rocketmq-cluster/templates/nameserver/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: "rocketmq-nameserver-headless"
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: nameserver
spec:
  clusterIP: "None"
  publishNotReadyAddresses: true
  ports:
    - port: 9876
      protocol: TCP
      targetPort: 9876
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: nameserver
```
- broker
```yaml
---
# Source: rocketmq-cluster/templates/broker/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rocketmq-broker-master
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: broker
      broker: rocketmq-broker-master
  serviceName: ""
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: broker
        broker: rocketmq-broker-master
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: broker
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: broker
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ROCKETMQ_PROCESS_ROLE
              value: broker
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_CONF_brokerId
              value: "0"
            - name: ROCKETMQ_CONF_brokerRole
              value: "ASYNC_MASTER"
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - containerPort: 10909
              name: vip
              protocol: TCP
            - containerPort: 10911
              name: main
              protocol: TCP
            - containerPort: 10912
              name: ha
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown broker"]
          volumeMounts:
            - mountPath: /home/rocketmq/logs
              name: broker-storage
              subPath: rocketmq-broker/logs
            - mountPath: /home/rocketmq/store
              name: broker-storage
              subPath: rocketmq-broker/store
            - mountPath: /etc/rocketmq/broker-base.conf
              name: broker-base-config
              subPath: broker-base.conf
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            items:
              - key: broker-base.conf
                path: broker-base.conf
            name: rocketmq-server-config
          name: broker-base-config
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-path
        resources:
          requests:
            storage: "100Mi"
---
# Source: rocketmq-cluster/templates/broker/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rocketmq-broker-replica-id1
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: broker
      broker: rocketmq-broker-replica-id1
  serviceName: ""
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: broker
        broker: rocketmq-broker-replica-id1
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: broker
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: broker
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ROCKETMQ_PROCESS_ROLE
              value: broker
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_CONF_brokerId
              value: "1"
            - name: ROCKETMQ_CONF_brokerRole
              value: "SLAVE"
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - containerPort: 10909
              name: vip
              protocol: TCP
            - containerPort: 10911
              name: main
              protocol: TCP
            - containerPort: 10912
              name: ha
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown broker"]
          volumeMounts:
            - mountPath: /home/rocketmq/logs
              name: broker-storage
              subPath: rocketmq-broker/logs
            - mountPath: /home/rocketmq/store
              name: broker-storage
              subPath: rocketmq-broker/store
            - mountPath: /etc/rocketmq/broker-base.conf
              name: broker-base-config
              subPath: broker-base.conf
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            items:
              - key: broker-base.conf
                path: broker-base.conf
            name: rocketmq-server-config
          name: broker-base-config
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-path
        resources:
          requests:
            storage: "100Mi"
```
- proxy
```yaml
---
# Source: rocketmq-cluster/templates/proxy/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "rocketmq-proxy"
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: proxy
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: proxy
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: proxy
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: proxy
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_PROCESS_ROLE
              value: proxy
            - name: RMQ_PROXY_CONFIG_PATH
              value: /etc/rocketmq/proxy.json
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - name: main
              containerPort: 8080
              protocol: TCP
            - name: grpc
              containerPort: 8081
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown proxy"]
          volumeMounts:
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
            - mountPath: /etc/rocketmq/proxy.json
              name: proxy-json
              subPath: proxy.json
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 15
      volumes:
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
        - configMap:
            items:
              - key: proxy.json
                path: proxy.json
            name: rocketmq-server-config
          name: proxy-json
```
```yaml
---
# Source: rocketmq-cluster/templates/proxy/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rocketmq-proxy
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: proxy
spec:
  ports:
    - port: 8080
      name: main
      protocol: TCP
      targetPort: 8080
    - port: 8081
      name: grpc
      protocol: TCP
      targetPort: 8081
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: proxy
  type: "ClusterIP"
```
- configmap
```yaml
---
# Source: rocketmq-cluster/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rocketmq-server-config
  namespace: rocketmq
data:
  broker-base.conf: |
    deleteWhen = 04
    fileReservedTime = 48
    flushDiskType = ASYNC_FLUSH
    waitTimeMillsInSendQueue = 1000
    brokerClusterName = rocketmq-helm
  controller-base.conf: |
    controllerDLegerGroup = group1
    enableElectUncleanMaster = false
    notifyBrokerRoleChanged = true
    controllerDLegerPeers = n0-rocketmq-controller-0.rocketmq-controller.rocketmq.svc:9878;n1-rocketmq-controller-1.rocketmq-controller.rocketmq.svc:9878;n2-rocketmq-controller-2.rocketmq-controller.rocketmq.svc:9878
    controllerStorePath = /home/rocketmq/controller-data
  proxy.json: |
    {
      "rocketMQClusterName": "rocketmq-helm"
    }
  mq-server-start.sh: |
    java -version
    if [ $? -ne 0 ]; then
      echo "[ERROR] Missing java runtime"
      exit 50
    fi
    if [ -z "${ROCKETMQ_HOME}" ]; then
      echo "[ERROR] Missing env ROCKETMQ_HOME"
      exit 50
    fi
    if [ -z "${ROCKETMQ_PROCESS_ROLE}" ]; then
      echo "[ERROR] Missing env ROCKETMQ_PROCESS_ROLE"
      exit 50
    fi
    export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
    export CLASSPATH=".:${ROCKETMQ_HOME}/conf:${ROCKETMQ_HOME}/lib/*:${CLASSPATH}"
    JAVA_OPT="${JAVA_OPT} -server"
    if [ -n "$ROCKETMQ_JAVA_OPTIONS_OVERRIDE" ]; then
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_OVERRIDE}"
    else
      JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC"
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_EXT}"
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_HEAP}"
    fi
    JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
    export BROKER_CONF_FILE="$HOME/broker.conf"
    export CONTROLLER_CONF_FILE="$HOME/controller.conf"
    update_broker_conf() {
      local key=$1
      local value=$2
      sed -i "/^${key} *=/d" ${BROKER_CONF_FILE}
      echo "${key} = ${value}" >> ${BROKER_CONF_FILE}
    }
    init_broker_role() {
      if [ "${ROCKETMQ_CONF_brokerRole}" = "SLAVE" ]; then
        update_broker_conf "brokerRole" "SLAVE"
      elif [ "${ROCKETMQ_CONF_brokerRole}" = "SYNC_MASTER" ]; then
        update_broker_conf "brokerRole" "SYNC_MASTER"
      else
        update_broker_conf "brokerRole" "ASYNC_MASTER"
      fi
      if echo "${ROCKETMQ_CONF_brokerId}" | grep -E '^[0-9]+$'; then
        update_broker_conf "brokerId" "${ROCKETMQ_CONF_brokerId}"
      fi
    }
    init_broker_conf() {
      rm -f ${BROKER_CONF_FILE}
      cp /etc/rocketmq/broker-base.conf ${BROKER_CONF_FILE}
      echo "" >> ${BROKER_CONF_FILE}
      echo "# generated config" >> ${BROKER_CONF_FILE}
      broker_name_seq=${HOSTNAME##*-}
      if [ -n "$MY_POD_NAME" ]; then
        broker_name_seq=${MY_POD_NAME##*-}
      fi
      update_broker_conf "brokerName" "broker-g${broker_name_seq}"
      if [ "$enableControllerMode" != "true" ]; then
        init_broker_role
      fi
      echo "[exec] cat ${BROKER_CONF_FILE}"
      cat ${BROKER_CONF_FILE}
    }
    init_acl_conf() {
      if [ -f /etc/rocketmq/acl/plain_acl.yml ]; then
        rm -f "${ROCKETMQ_HOME}/conf/plain_acl.yml"
        ln -sf "/etc/rocketmq/acl" "${ROCKETMQ_HOME}/conf/acl"
      fi
    }
    init_controller_conf() {
      rm -f ${CONTROLLER_CONF_FILE}
      cp /etc/rocketmq/base-cm/controller-base.conf ${CONTROLLER_CONF_FILE}
      controllerDLegerSelfId="n${HOSTNAME##*-}"
      if [ -n "$MY_POD_NAME" ]; then
        controllerDLegerSelfId="n${MY_POD_NAME##*-}"
      fi
      sed -i "/^controllerDLegerSelfId *=/d" ${CONTROLLER_CONF_FILE}
      echo "controllerDLegerSelfId = ${controllerDLegerSelfId}" >> ${CONTROLLER_CONF_FILE}
      cat ${CONTROLLER_CONF_FILE}
    }
    if [ "$ROCKETMQ_PROCESS_ROLE" = "broker" ]; then
      init_broker_conf
      init_acl_conf
      set -x
      java ${JAVA_OPT} org.apache.rocketmq.broker.BrokerStartup -c ${BROKER_CONF_FILE}
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "controller" ]; then
      init_controller_conf
      set -x
      java ${JAVA_OPT} org.apache.rocketmq.controller.ControllerStartup -c ${CONTROLLER_CONF_FILE}
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "nameserver" ] || [ "$ROCKETMQ_PROCESS_ROLE" = "mqnamesrv" ]; then
      set -x
      if [ "$enableControllerInNamesrv" = "true" ]; then
        init_controller_conf
        java ${JAVA_OPT} org.apache.rocketmq.namesrv.NamesrvStartup -c ${CONTROLLER_CONF_FILE}
      else
        java ${JAVA_OPT} org.apache.rocketmq.namesrv.NamesrvStartup
      fi
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "proxy" ]; then
      set -x
      if [ -f $RMQ_PROXY_CONFIG_PATH ]; then
        java ${JAVA_OPT} org.apache.rocketmq.proxy.ProxyStartup -pc $RMQ_PROXY_CONFIG_PATH
      else
        java ${JAVA_OPT} org.apache.rocketmq.proxy.ProxyStartup
      fi
    else
      echo "[ERROR] Missing env ROCKETMQ_PROCESS_ROLE"
      exit 50
    fi
```
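The `update_broker_conf` helper in the startup script is worth understanding: it deletes any existing `key = ...` line and appends the new value, so repeated calls always leave exactly one entry per key. A standalone sketch of the same logic can be exercised outside the container:

```shell
#!/bin/sh
# Standalone sketch of the chart's update_broker_conf helper:
# delete any existing "key = ..." line, then append the new value,
# so repeated calls keep exactly one entry per key.
BROKER_CONF_FILE="${BROKER_CONF_FILE:-/tmp/broker.conf}"

update_broker_conf() {
  key=$1
  value=$2
  sed -i "/^${key} *=/d" "${BROKER_CONF_FILE}"
  echo "${key} = ${value}" >> "${BROKER_CONF_FILE}"
}
```

This is how the script safely overrides `brokerRole` and `brokerId` on top of the broker-base.conf copied from the ConfigMap.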
Pitfalls
- Storage permission issues
After configuring the PVs, startup kept failing.
Check the pod events:
```shell
kubectl describe pod rocketmq-broker-master-0 -n rocketmq-demo
```
Then check the application startup log:
```shell
kubectl logs rocketmq-broker-master-0 -n rocketmq-demo
```
The specific error messages:
```
03:30:58,822 |-ERROR in org.apache.rocketmq.logging.ch.qos.logback.core.rolling.RollingFileAppender[RocketmqAuthAuditAppender_inner] - Failed to create parent directories for [/home/rocketmq/logs/rocketmqlogs/auth_audit.log]
03:30:58,822 |-ERROR in org.apache.rocketmq.logging.ch.qos.logback.core.rolling.RollingFileAppender[RocketmqAuthAuditAppender_inner] - openFile(/home/rocketmq/logs/rocketmqlogs///auth_audit.log,true) call failed.
java.io.FileNotFoundException: /home/rocketmq/logs/rocketmqlogs/auth_audit.log (No such file or directory)
java.lang.NullPointerException
	at org.apache.rocketmq.broker.schedule.ScheduleMessageService.configFilePath(ScheduleMessageService.java:272)
	at org.apache.rocketmq.common.ConfigManager.persist(ConfigManager.java:83)
	at org.apache.rocketmq.broker.BrokerController.shutdownBasicService(BrokerController.java:1478)
	at org.apache.rocketmq.broker.BrokerController.shutdown(BrokerController.java:1565)
	at org.apache.rocketmq.broker.BrokerStartup.createBrokerController(BrokerStartup.java:250)
	at org.apache.rocketmq.broker.BrokerStartup.main(BrokerStartup.java:52)
```
Some searching showed the cause: the pod has no read/write permission on the mounted host directory.
The fix:
1. Move the directory out of root's home
The directory belonged to the root account, while K8s launches the pods under a different account, so the mounted directory was moved to /tmp and the PV manifests above were updated accordingly.
2. Create the PV directories up front
Create /tmp/storage beforehand; otherwise startup fails complaining that the PVC's directory does not exist.
3. chmod the directory and all its children read/write; -R is needed to recurse:
```shell
chmod -R 777 storage
```
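World-writable 777 is fine for a throwaway dev box; a tighter option (my own suggestion, not from the chart docs) would be `chown -R 3000:3000`, matching the `runAsUser`/`fsGroup` of 3000 in the pod security context. A small sketch that prepares and verifies both host directories (`prepare_storage_dir` and `BASE` are hypothetical names):

```shell
#!/bin/sh
# Prepare hostPath directories for the broker pods (runAsUser/fsGroup 3000).
# BASE matches the PV manifests above; override it for testing.
BASE="${BASE:-/tmp}"

prepare_storage_dir() {
  dir="$1"
  mkdir -p "$dir"
  chmod -R 777 "$dir"   # dev-only; chown -R 3000:3000 "$dir" is tighter
}

prepare_storage_dir "$BASE/storage"
prepare_storage_dir "$BASE/storageSlave"
```

Running this on each node before `kubectl apply` avoids both the "directory does not exist" and the permission failures above.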
- Master/replica conflict
After fixing the file permissions, startup failed again with another error: with one master and one replica broker, one started normally while the other kept failing. Searching suggests this error normally appears when two brokers are deployed on the same machine.
But this is a K8s cluster where nodes should be isolated, so the suspicion was that both brokers had mounted the same host directory via their PVs. Changing the two PVs to mount different directories (the replica uses storageSlave, as in the PV manifests above) made the next startup succeed.
- Namespace issue
After moving to the development environment, the broker and nameserver started normally, but the proxy would not start, reporting:
```
org.apache.rocketmq.proxy.common.ProxyException: create system broadcast topic DefaultHeartBeatSyncerTopic failed on cluster rocketmq-helm
```
This never happened when applying the rendered YAML locally.
Searching suggests the same error also appears when brokers are not correctly pointed at the nameserver, so the suspicion was that some configuration had to change with the environment. All configuration involving broker and proxy nameserver addresses was reviewed carefully.
One more difference: the development environment is shared by many people with many running applications, and the rendered YAML starts pods in K8s's default namespace, which is messy and hard to manage. So `namespace: rocketmq` was added to the YAML files.
That turned out to be exactly the cause: the proxy and broker manifests also contain this line that supplies the nameserver address:
The value needs to change from:
```yaml
value: rocketmq-nameserver-0.rocketmq-nameserver-headless.default.svc:9876
```
to:
```yaml
value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
```
This environment variable supplies the RocketMQ NameServer address and port: rocketmq-nameserver-0.rocketmq-nameserver-headless.default.svc is the NameServer pod's DNS name, and 9876 is the NameServer service port. Clients and brokers use this address to reach the NameServer for service discovery and metadata synchronization.
Here, rocketmq-nameserver-0 is the nameserver pod's name, rocketmq-nameserver-headless is the headless Service's name, and default is the namespace. So after deploying into a new K8s namespace, this default must also be changed to the rocketmq namespace; otherwise the "cannot create topic" error appears. Oddly, the broker starts fine and only the proxy fails with this error; presumably something changed in RocketMQ 5.
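The per-pod DNS name follows the `<pod>.<headless-service>.<namespace>.svc` pattern for StatefulSet pods. A quick shell sketch that splits the address makes the namespace component obvious (the `parse_namesrv_addr` helper is hypothetical, for illustration only):

```shell
#!/bin/sh
# Split a StatefulSet pod FQDN of the form
#   <pod>.<headless-service>.<namespace>.svc:<port>
# into its components.
parse_namesrv_addr() {
  addr="$1"
  host="${addr%%:*}"                  # strip the :port suffix
  port="${addr##*:}"
  pod="$(echo "$host" | cut -d. -f1)"
  svc="$(echo "$host" | cut -d. -f2)"
  ns="$(echo "$host" | cut -d. -f3)"
  echo "pod=$pod service=$svc namespace=$ns port=$port"
}

parse_namesrv_addr "rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876"
```

The third dot-separated field is the namespace, which is exactly the part that silently broke when moving off `default`.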
Someone who debugged the startup source points at the same place:
Reading BrokerStartup.java shows this is the namesrv address; without it, the broker process starts, but it never registers anything with the nameserver.
The failure then shows up when the proxy starts, which always reports:
```
create system broadcast topic DefaultHeartBeatSyncerTopic failed on cluster DefaultCluster
```
Summary
The above reflects my personal experience; I hope it serves as a useful reference.