A detailed tutorial on installing and deploying a 4-node Hadoop 3.2.1 distributed cluster learning environment on OL7.7
Prepare four virtual machines with OL7.7 installed and assign them the static IPs 192.168.168.11, 12, 13, and 14. 192.168.168.11 serves as the master and the other three as slaves. The master runs the NameNode and is also a DataNode; 192.168.168.14 is a DataNode and additionally runs the secondary NameNode.
First, edit /etc/hostname on each machine and set the hostnames to master, slave1, slave2, and slave3 respectively.
Then add the following entries to /etc/hosts on every node:
192.168.168.11 master
192.168.168.12 slave1
192.168.168.13 slave2
192.168.168.14 slave3
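The four entries above can be generated in one go instead of being typed by hand; this is a minimal sketch using the same IPs and names, whose output you would append to /etc/hosts on every node:

```shell
# Print the four /etc/hosts entries used by this cluster; append the output
# to /etc/hosts on each node (e.g. pipe it through `sudo tee -a /etc/hosts`).
printf '192.168.168.%s %s\n' 11 master 12 slave1 13 slave2 14 slave3
```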
Next, uninstall the bundled OpenJDK and replace it with the Oracle (Sun) JDK; for reference see http://www.dbjr.com.cn/article/190489.htm
Configure passwordless SSH login to the local machine:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
Set up mutual SSH trust between the nodes.
On the master, copy the public key to each slave:
scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave3:/home/hadoop/
On each slave, append the master's public key to that node's authorized keys:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
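The two distribution steps above can also be driven from the master in one loop. This is a dry-run sketch that only prints the per-slave commands (the hostnames and the hadoop user match the setup above); remove the leading `echo`s to actually execute them:

```shell
# Dry run: print the copy-key and append-key command for each slave.
# Drop the `echo`s to run them for real from the master.
for h in slave1 slave2 slave3; do
  echo scp ~/.ssh/id_rsa.pub hadoop@"$h":/home/hadoop/
  echo ssh hadoop@"$h" "cat ~/id_rsa.pub >> ~/.ssh/authorized_keys"
done
```

Afterwards, `ssh slave1 hostname` from the master should succeed without a password prompt.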
Install Hadoop on the master:
sudo tar -xzvf ~/hadoop-3.2.1.tar.gz -C /usr/local
cd /usr/local
sudo mv hadoop-3.2.1/ ./hadoop
sudo chown -R hadoop: ./hadoop
(Note: the binary tarball extracts to hadoop-3.2.1/, not hadoop-3.2.1-src/.)
Add the following to ~/.bashrc and source it so that it takes effect:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
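The wiring can be sanity-checked in the current shell without logging out; a minimal sketch, assuming the tarball was unpacked to /usr/local/hadoop as above:

```shell
# Recreate the three exports and verify the derived paths; on the real
# cluster, `hadoop version` should then print "Hadoop 3.2.1".
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
echo "$HADOOP_CONF_DIR"
```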
Cluster configuration: the configuration files live in the /usr/local/hadoop/etc/hadoop directory.
Edit core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/data/nameNode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/data/dataNode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave3:50090</value>
  </property>
</configuration>
(dfs.namenode.secondary.http-address is the current name of the key; the older dfs.secondary.http.address still works but is deprecated.)
Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
</configuration>
Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Edit hadoop-env.sh, find the JAVA_HOME setting, and point it at the JDK directory:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191
Edit workers:
[hadoop@master /usr/local/hadoop/etc/hadoop]$ vim workers
master
slave1
slave2
slave3
Finally, copy the configured /usr/local/hadoop directory to the other nodes:
sudo scp -r /usr/local/hadoop/ slave1:/usr/local/
sudo scp -r /usr/local/hadoop/ slave2:/usr/local/
sudo scp -r /usr/local/hadoop/ slave3:/usr/local/
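The copy and the ownership fix that follows can be scripted for all three slaves at once. This dry-run sketch only prints the commands (remove the `echo`s to execute them; the chown mirrors the one done on the master earlier):

```shell
# Dry run: print the copy + chown commands for each slave node.
# Drop the `echo`s to run them for real.
for h in slave1 slave2 slave3; do
  echo sudo scp -r /usr/local/hadoop/ "$h":/usr/local/
  echo ssh "$h" "sudo chown -R hadoop: /usr/local/hadoop"
done
```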
Also change the owner of the copied directory to the hadoop user on each slave:
sudo chown -R hadoop: /usr/local/hadoop
Disable the firewall on every node:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Format HDFS. Run this once before the first start; it is not needed again afterwards, and it should be executed on the master (the NameNode host):
/usr/local/hadoop/bin/hdfs namenode -format
(The older form `hadoop namenode -format` still works but is deprecated.)
Seeing "successfully formatted" in the output indicates success.
Start HDFS across the cluster with start-dfs.sh.
Check which daemons are running on each node with the jps command.
The cluster can be monitored in a browser via port 9870 on the master: http://192.168.168.11:9870/
You can also check the cluster status from the command line with hdfs dfsadmin -report (the older hadoop dfsadmin form still works but prints a deprecation warning):
[hadoop@master ~]$ hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Configured Capacity: 201731358720 (187.88 GB)
Present Capacity: 162921230336 (151.73 GB)
DFS Remaining: 162921181184 (151.73 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Replicated Blocks:
        Under replicated blocks: 0
        Blocks with corrupt replicas: 0
        Missing blocks: 0
        Missing blocks (with replication factor 1): 0
        Low redundancy blocks with highest priority to recover: 0
        Pending deletion blocks: 0
Erasure Coded Block Groups:
        Low redundancy block groups: 0
        Block groups with corrupt internal blocks: 0
        Missing block groups: 0
        Low redundancy blocks with highest priority to recover: 0
        Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.168.11:9866 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9796546560 (9.12 GB)
DFS Remaining: 40636280832 (37.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.58%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.12:9866 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9710411776 (9.04 GB)
DFS Remaining: 40722415616 (37.93 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.75%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.13:9866 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9657286656 (8.99 GB)
DFS Remaining: 40775540736 (37.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.14:9866 (slave3)
Hostname: slave3
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9645883392 (8.98 GB)
DFS Remaining: 40786944000 (37.99 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

[hadoop@master ~]$
start-yarn.sh starts YARN, which can then be monitored via port 8088 on the master.
To start the whole cluster (HDFS and YARN together):
/usr/local/hadoop/sbin/start-all.sh
To stop the whole cluster:
/usr/local/hadoop/sbin/stop-all.sh
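With HDFS and YARN up, a quick way to verify the whole stack is to run the pi example from the jar bundled in the 3.2.1 tarball. This is a dry-run sketch that only prints the command (drop the `echo` to run it on the master; the jar path matches the tarball layout under /usr/local/hadoop):

```shell
# Dry run of a MapReduce smoke test using the bundled examples jar.
# Remove the `echo` to actually submit the job once YARN is running.
EXAMPLES_JAR=/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar
echo hadoop jar "$EXAMPLES_JAR" pi 2 10
```

A successful run prints an estimate of pi and confirms that HDFS, YARN, and the MapReduce framework are all wired up.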
That is the whole process, recorded here for future reference.