
A detailed tutorial: installing and deploying a 4-node Hadoop 3.2.1 distributed cluster learning environment on OL7.7


Prepare four virtual machines with OL7.7 (Oracle Linux 7.7) installed and assign them the static IPs 192.168.168.11/12/13/14. 192.168.168.11 serves as the master and the other three as slaves. The master runs both the NameNode and a DataNode, while 192.168.168.14 runs a DataNode and also hosts the Secondary NameNode.

First, edit /etc/hostname on each machine and set the hostnames to master, slave1, slave2, and slave3 respectively.

Then append the following entries to /etc/hosts on every node:

192.168.168.11 master
192.168.168.12 slave1
192.168.168.13 slave2
192.168.168.14 slave3
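Since the addresses are consecutive, the four entries can also be generated in one small loop instead of typed by hand (a sketch; append its output to /etc/hosts yourself):

```shell
# Generate the four host entries; IPs 192.168.168.11..14 as listed above.
i=11
for h in master slave1 slave2 slave3; do
  echo "192.168.168.$i $h"
  i=$((i+1))
done
# Append the printed lines to /etc/hosts on every node.
```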

Next, uninstall the bundled OpenJDK and install the Oracle (Sun) JDK instead; see https://www.jb51.net/article/190489.htm for the steps.

Configure passwordless SSH login to the local machine (as the hadoop user; verify afterwards with ssh localhost):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

Configure mutual trust between the nodes

On master, send the public key to each slave:

scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave3:/home/hadoop/

On each slave, append master's public key to that node's authorized_keys:

cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
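If ssh-copy-id is available (it normally is on OL7.7), the copy-then-append steps above collapse into one command per slave. Sketched here as a dry run that only prints the commands; remove the echo to execute them:

```shell
# Dry run: print the ssh-copy-id invocation for each slave.
# ssh-copy-id appends ~/.ssh/id_rsa.pub to the remote authorized_keys
# and fixes its permissions in one step.
for h in slave1 slave2 slave3; do
  echo ssh-copy-id "hadoop@$h"
done
```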

Install Hadoop on master

sudo tar -xzvf ~/hadoop-3.2.1.tar.gz -C /usr/local
cd /usr/local
sudo mv hadoop-3.2.1/ ./hadoop
sudo chown -R hadoop: ./hadoop

Add the following to ~/.bashrc and run source ~/.bashrc to make it take effect:

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
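After sourcing ~/.bashrc, a quick way to confirm the PATH change took effect (a small sketch; on a configured node, hadoop version should then also print the release):

```shell
# Re-create the PATH change from above and check that hadoop's bin dir is on PATH.
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing hadoop bin" ;;
esac
```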

Cluster configuration: the configuration files live in /usr/local/hadoop/etc/hadoop.

Edit core-site.xml

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>file:/usr/local/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
 </property>
</configuration>

Edit hdfs-site.xml

<configuration>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hadoop/data/nameNode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/hadoop/data/dataNode</value>
 </property>
 <property>
  <name>dfs.replication</name>
  <value>3</value>
 </property>
 <!-- Hadoop 3 property name; the old dfs.secondary.http.address is deprecated -->
 <property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>slave3:50090</value>
 </property>
</configuration>

Edit mapred-site.xml

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>
 <property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
 </property>
 <property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
 </property>
 <property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
 </property>
</configuration>

Note: if MapReduce jobs later fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster", replace $HADOOP_HOME in the three values above with the literal path /usr/local/hadoop, since Hadoop does not reliably expand shell variables here.

Edit yarn-site.xml

<configuration>
 <property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
 </property>
 <property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
 </property>
</configuration>

In hadoop-env.sh, find the JAVA_HOME setting and change it to your JDK install directory:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191
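If you are unsure of the JDK directory, it can be derived from the java binary on PATH; this helper is an assumption, not part of the original steps. The transformation is demonstrated on the literal path used above; on a live machine you would feed it the resolved binary path instead:

```shell
# sed strips the trailing /bin/java, leaving the JAVA_HOME candidate.
echo /usr/lib/jvm/jdk1.8.0_191/bin/java | sed 's|/bin/java$||'
# On a live machine: readlink -f "$(command -v java)" | sed 's|/bin/java$||'
```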

Edit workers to list every node that should run a DataNode:

[hadoop@master /usr/local/hadoop/etc/hadoop]$ vim workers
master
slave1
slave2
slave3

Finally, copy the configured /usr/local/hadoop directory to the other nodes:

sudo scp -r /usr/local/hadoop/ slave1:/usr/local/
sudo scp -r /usr/local/hadoop/ slave2:/usr/local/
sudo scp -r /usr/local/hadoop/ slave3:/usr/local/
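The three copies follow one pattern, so a loop works too; printed as a dry run here (remove the echo to actually copy):

```shell
# Dry run: print the distribution command for each slave.
for h in slave1 slave2 slave3; do
  echo sudo scp -r /usr/local/hadoop/ "$h:/usr/local/"
done
```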

On each slave, change the copied directory's owner back to hadoop:

sudo chown -R hadoop: /usr/local/hadoop

Then disable the firewall on every node:

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Format HDFS. This is done once before the first start and never again afterwards; run it on the master (the NameNode host):

/usr/local/hadoop/bin/hdfs namenode -format

Seeing "successfully formatted" in the output means the format succeeded.

Run start-dfs.sh to start HDFS across the cluster.

Use the jps command to check what is running on each node: master should show NameNode and DataNode, slave1 and slave2 a DataNode each, and slave3 a DataNode plus SecondaryNameNode.

The HDFS web UI is served on the master's port 9870: http://192.168.168.11:9870/

You can also inspect the cluster state from the command line with hdfs dfsadmin -report (the older hadoop dfsadmin spelling still works but is deprecated, as the warning below shows):

[hadoop@master ~]$ hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.
 
Configured Capacity: 201731358720 (187.88 GB)
Present Capacity: 162921230336 (151.73 GB)
DFS Remaining: 162921181184 (151.73 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Replicated Blocks:
 Under replicated blocks: 0
 Blocks with corrupt replicas: 0
 Missing blocks: 0
 Missing blocks (with replication factor 1): 0
 Low redundancy blocks with highest priority to recover: 0
 Pending deletion blocks: 0
Erasure Coded Block Groups:
 Low redundancy block groups: 0
 Block groups with corrupt internal blocks: 0
 Missing block groups: 0
 Low redundancy blocks with highest priority to recover: 0
 Pending deletion blocks: 0
 
-------------------------------------------------
Live datanodes (4):
 
Name: 192.168.168.11:9866 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9796546560 (9.12 GB)
DFS Remaining: 40636280832 (37.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.58%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0
 
 
Name: 192.168.168.12:9866 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9710411776 (9.04 GB)
DFS Remaining: 40722415616 (37.93 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.75%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0
 
 
Name: 192.168.168.13:9866 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9657286656 (8.99 GB)
DFS Remaining: 40775540736 (37.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0
 
 
Name: 192.168.168.14:9866 (slave3)
Hostname: slave3
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9645883392 (8.98 GB)
DFS Remaining: 40786944000 (37.99 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0
 
 
[hadoop@master ~]$
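The report is plain text, so a quick scripted health check is easy. For example, extracting the live-DataNode count; shown here against a captured line in the format above, whereas on a live cluster you would pipe the report command straight into sed:

```shell
# Extract the number N from the "Live datanodes (N):" line of a dfsadmin report.
# On a live cluster: hdfs dfsadmin -report | sed -n 's/^Live datanodes (\([0-9]*\)):.*/\1/p'
report='Live datanodes (4):'
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9]*\)):.*/\1/p')
echo "live datanodes: $live"
if [ "$live" -eq 4 ]; then echo "all four datanodes up"; fi
```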

start-yarn.sh starts YARN; the ResourceManager web UI is on the master's port 8088.

To start HDFS and YARN together: /usr/local/hadoop/sbin/start-all.sh

To stop the cluster: /usr/local/hadoop/sbin/stop-all.sh

That's it; these notes are recorded here for future reference.
