
How to deploy HADOOP-3.2.2 (HDFS) on k8s

Updated: 2022-04-07 15:11:06   Author: Oh寶貝兒
This article explains how to deploy HADOOP-3.2.2 (HDFS) on k8s. The steps are covered in detail, and the walkthrough should be a useful reference for study or work; readers who need this setup can follow along.

Environment and versions
k8s: v1.21.1
hadoop: 3.2.2
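The manifests in this article assume two cluster prerequisites that are not created here: a big-data namespace and a managed-nfs-storage StorageClass backed by an NFS dynamic provisioner. A minimal sketch of checking those prerequisites, assuming the NFS provisioner is already installed:

# create the namespace referenced by every resource below
kubectl create namespace big-data
# confirm the StorageClass requested by the PVCs exists
kubectl get storageclass managed-nfs-storage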

Dockerfile

FROM openjdk:8-jdk
# to SSH into the container, add your own public key (optional)
ARG SSH_PUB='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3nTRJ/aVb67l1xMaN36jmIbabU7Hiv/xpZ8bwLVvNO3Bj7kUzYTp7DIbPcHQg4d6EsPC6j91E8zW6CrV2fo2Ai8tDO/rCq9Se/64F3+8oEIiI6E/OfUZfXD1mPbG7M/kcA3VeQP6wxNPhWBbKRisqgUc6VTKhl+hK6LwRTZgeShxSNcey+HZst52wJxjQkNG+7CAEY5bbmBzAlHCSl4Z0RftYTHR3q8LcEg7YLNZasUogX68kBgRrb+jw1pRMNo7o7RI9xliDAGX+E4C3vVZL0IsccKgr90222axsADoEjC9O+Q6uwKjahemOVaau+9sHIwkelcOcCzW5SuAwkezv 805899926@qq.com'
RUN apt-get update && apt-get install -y openssh-server net-tools vim git;
RUN sed -i -r 's/^\s*UseDNS\s+\w+/#\0/; s/^\s*PasswordAuthentication\s+\w+/#\0/; s/^\s*ClientAliveInterval\s+\w+/#\0/' /etc/ssh/sshd_config;
RUN printf 'UseDNS no\nPermitRootLogin yes\nPasswordAuthentication yes\nClientAliveInterval 30\n' >> /etc/ssh/sshd_config;
RUN cat /etc/ssh/sshd_config
RUN su root bash -c 'cd;mkdir .ssh;chmod 700 .ssh;echo ${SSH_PUB} > .ssh/authorized_keys;chmod 644 .ssh/authorized_keys'
# -N "" gives an empty passphrase so the build does not block on a prompt
RUN su root bash -c 'cd; ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa; cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys'
# hadoop
ENV HADOOP_TGZ_URL=https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.2.2/hadoop-3.2.2.tar.gz
ENV HADOOP_HOME=/opt/hadoop
ENV PATH=$HADOOP_HOME/bin:$PATH
RUN set -ex; \
    mkdir -p $HADOOP_HOME; \
    wget -nv -O $HADOOP_HOME/src.tgz $HADOOP_TGZ_URL; \
    tar -xf $HADOOP_HOME/src.tgz --strip-components=1 -C $HADOOP_HOME; \
    rm $HADOOP_HOME/src.tgz; \
    chown -R root:root $HADOOP_HOME
RUN mkdir -p $HADOOP_HOME/hdfs/name/ && mkdir -p $HADOOP_HOME/hdfs/data/
# clean trash file or dir
RUN rm -rf $HADOOP_HOME/share/doc/;
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
EXPOSE 22 9870 9000
ENTRYPOINT ["/docker-entrypoint.sh"]

docker-entrypoint.sh

#!/bin/bash
set -e
service ssh start
hdfs_dir=$HADOOP_HOME/hdfs/
if [ "$HADOOP_NODE_TYPE" = "datanode" ]; then
  echo -e "\033[32m start datanode \033[0m"
  $HADOOP_HOME/bin/hdfs datanode -regular
fi
if [ "$HADOOP_NODE_TYPE" = "namenode" ]; then
  # format HDFS only on first start, while the data directory is still empty
  if [ -z "$(ls -A ${hdfs_dir})" ]; then
    echo -e "\033[32m start hdfs namenode format \033[0m"
    $HADOOP_HOME/bin/hdfs namenode -format
  fi
  echo -e "\033[32m start hdfs namenode \033[0m"
  $HADOOP_HOME/bin/hdfs namenode
fi
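With the Dockerfile and docker-entrypoint.sh in the same build directory, build the image and push it to the registry referenced by the pod specs below. registry:5000/hadoop is the image name this article uses; substitute your own registry host. A minimal sketch:

docker build -t registry:5000/hadoop .
docker push registry:5000/hadoop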

pod template

apiVersion: v1
kind: ConfigMap
metadata:
  name: hadoop
  namespace: big-data
  labels:
    app: hadoop
data:
  hadoop-env.sh: |
    export HDFS_DATANODE_USER=root
    export HDFS_NAMENODE_USER=root
    export HDFS_SECONDARYNAMENODE_USER=root
    export JAVA_HOME=/usr/local/openjdk-8
    export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
    export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
  core-site.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop-master:9000</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-bind-host</name>
            <value>0.0.0.0</value>
        </property>
    </configuration>
  hdfs-site.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///opt/hadoop/hdfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:///opt/hadoop/hdfs/data</value>
        </property>
        <property>
            <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
            <value>false</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
---
# namenode svc
apiVersion: v1
kind: Service
metadata:
  name: hadoop-master
  namespace: big-data
spec:
  selector:
    app: hadoop-namenode
  type: NodePort
  ports:
    - name: rpc
      port: 9000
      targetPort: 9000
    - name: http
      port: 9870
      targetPort: 9870
      # 9870 is outside the default NodePort range (30000-32767); keeping it
      # assumes the apiserver's --service-node-port-range has been widened
      nodePort: 9870
# namenode pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-namenode
  namespace: big-data
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: hadoop-namenode
  template:
    metadata:
      labels:
        app: hadoop-namenode
    spec:
      volumes:
        - name: hadoop-env
          configMap:
            name: hadoop
            items:
              - key: hadoop-env.sh
                path: hadoop-env.sh
        - name: core-site
          configMap:
            name: hadoop
            items:
              - key: core-site.xml
                path: core-site.xml
        - name: hdfs-site
          configMap:
            name: hadoop
            items:
              - key: hdfs-site.xml
                path: hdfs-site.xml
        - name: hadoop-data
          persistentVolumeClaim:
            claimName: data-hadoop-namenode
      containers:
        - name: hadoop
          image: registry:5000/hadoop
          imagePullPolicy: Always
          ports:
            - containerPort: 22
            - containerPort: 9000
            - containerPort: 9870
          volumeMounts:
            - name: hadoop-env
              mountPath: /opt/hadoop/etc/hadoop/hadoop-env.sh
              subPath: hadoop-env.sh
            - name: core-site
              mountPath: /opt/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
            - name: hdfs-site
              mountPath: /opt/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
            - name: hadoop-data
              mountPath: /opt/hadoop/hdfs/
              subPath: hdfs
            - name: hadoop-data
              mountPath: /opt/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: namenode
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-hadoop-namenode
  namespace: big-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi
  storageClassName: "managed-nfs-storage"
# datanode pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hadoop-datanode
  namespace: big-data
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hadoop-datanode
  serviceName: hadoop-datanode
  template:
    metadata:
      labels:
        app: hadoop-datanode
    spec:
      volumes:
        - name: hadoop-env
          configMap:
            name: hadoop
            items:
              - key: hadoop-env.sh
                path: hadoop-env.sh
        - name: core-site
          configMap:
            name: hadoop
            items:
              - key: core-site.xml
                path: core-site.xml
        - name: hdfs-site
          configMap:
            name: hadoop
            items:
              - key: hdfs-site.xml
                path: hdfs-site.xml
      containers:
        - name: hadoop
          image: registry:5000/hadoop
          imagePullPolicy: Always
          ports:
            - containerPort: 22
            - containerPort: 9000
            - containerPort: 9870
          volumeMounts:
            - name: hadoop-env
              mountPath: /opt/hadoop/etc/hadoop/hadoop-env.sh
              subPath: hadoop-env.sh
            - name: core-site
              mountPath: /opt/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
            - name: hdfs-site
              mountPath: /opt/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
            - name: data
              mountPath: /opt/hadoop/hdfs/
              subPath: hdfs
            - name: data
              mountPath: /opt/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: datanode
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: big-data
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 256Gi
        storageClassName: "managed-nfs-storage"
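
Save the manifests above to a single file and apply it, then check that the NameNode and both DataNode pods come up and that the DataNodes register with the NameNode. A minimal verification sketch (hadoop.yaml is an assumed file name):

kubectl apply -f hadoop.yaml
kubectl -n big-data get pods -o wide
# once everything is Running, the report should list two live datanodes
kubectl -n big-data exec deploy/hadoop-namenode -- hdfs dfsadmin -report

The NameNode web UI should then be reachable on any cluster node at port 9870 (the nodePort configured above).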

This concludes the walkthrough of deploying HADOOP-3.2.2 (HDFS) on k8s.
