
Hadoop 3.2.0 + CentOS 7: Three-Node Fully Distributed Installation and Configuration



1. Environment Preparation

① Prepare three virtual machines and give each one a static IP (a minimal sketch follows).
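A minimal static IP sketch for CentOS 7, assuming the interface is named ens33 and the addresses match the hosts table below; the gateway and DNS values are placeholders for your own network:

vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.60.121   # use .122 / .123 on the slave nodes
PREFIX=24
GATEWAY=192.168.60.2    # assumed gateway, adjust for your network
DNS1=192.168.60.2       # assumed DNS, adjust for your network

systemctl restart network   # apply the change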

② Set the hostname on each node (use one consistent naming convention across the cluster):

vim /etc/hostname
master  # takes effect after a reboot; use slave1 / slave2 on the other nodes
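Alternatively, hostnamectl changes the hostname immediately, without a reboot:

hostnamectl set-hostname master   # run slave1 / slave2 on the other two nodes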

Add the hostname mappings to /etc/hosts on every node:

vim /etc/hosts
192.168.60.121 master
192.168.60.122 slave1
192.168.60.123 slave2
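A quick sanity check from the master node that the names resolve:

ping -c 1 slave1
ping -c 1 slave2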

Permanently disable the firewall:

systemctl stop firewalld
systemctl disable firewalld
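To confirm the firewall is stopped and will stay off after a reboot:

systemctl status firewalld   # should report "inactive (dead)" and "disabled"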

Configure passwordless SSH login:

ssh-keygen -t rsa              # press Enter through all prompts
cd ~/.ssh
cp id_rsa.pub authorized_keys  # seed authorized_keys with this node's public key

Copy authorized_keys to the slave nodes:

scp authorized_keys root@slave1:/root/.ssh/
scp authorized_keys root@slave2:/root/.ssh/

Log in to slave1, cd to ~/.ssh, and append that node's own public key:

cat id_rsa.pub >> authorized_keys  # append with cat

Repeat the same step on slave2, then copy the final authorized_keys back to all three hosts so every node trusts every other node.
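Once every node's key is in the shared authorized_keys, each host should reach the others without a password; a quick check from master:

ssh slave1 hostname   # should print "slave1" with no password prompt
ssh slave2 hostname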

2. Configure JDK 1.8

Extract the JDK to a directory of your choice, for example as sketched below.
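A sketch assuming the tarball is jdk-8u201-linux-x64.tar.gz and /usr/local is the target (both the file name and the path are assumptions):

tar -zxvf jdk-8u201-linux-x64.tar.gz -C /usr/local/
# JAVA_HOME below would then be /usr/local/jdk1.8.0_201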

vim /etc/profile   # add the following lines
export JAVA_HOME=<JDK install directory>
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin

Save, then apply the changes:

source /etc/profile

Verify:

java -version

3. Hadoop Environment Configuration

Extract the Hadoop tarball and move it to a location of your choice, for example as sketched below.
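A sketch assuming the hadoop-3.2.0.tar.gz release tarball and /usr/local as the target (the path is an assumption):

tar -zxvf hadoop-3.2.0.tar.gz -C /usr/local/
# HADOOP_HOME below would then be /usr/local/hadoop-3.2.0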

vim /etc/profile
export HADOOP_HOME=<Hadoop install directory>
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

Reload so the changes take effect:

source /etc/profile

First, point hadoop-env.sh, mapred-env.sh, and yarn-env.sh (all under $HADOOP_CONF_DIR) at the JDK:

export JAVA_HOME=<JDK install directory>

Configure core-site.xml:

<configuration>
  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/data/hdfs/tmp</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>

Configure hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/data/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/data/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property> 
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/data/hdfs/checkpoint</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>file:/usr/local/data/hdfs/edits</value>
  </property>
</configuration>

Configure yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

Configure mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

Edit the workers file: delete localhost and replace it with the two slave hostnames:

slave1
slave2

Create the data directories referenced above (Hadoop can usually create these itself, but making them up front avoids surprises):

mkdir -p /usr/local/data/hdfs/tmp
mkdir -p /usr/local/data/hdfs/name
mkdir -p /usr/local/data/hdfs/data
mkdir -p /usr/local/data/hdfs/checkpoint
mkdir -p /usr/local/data/hdfs/edits
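The same layout is needed on the slave nodes; a sketch that creates it over SSH, using the passwordless login set up earlier:

for host in slave1 slave2; do
  ssh "$host" "mkdir -p /usr/local/data/hdfs/{tmp,name,data,checkpoint,edits}"
done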

Copy the Hadoop directory to the slave nodes:

scp -r <Hadoop install directory> slave1:<same path>
scp -r <Hadoop install directory> slave2:<same path>
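For example, with the assumed /usr/local/hadoop-3.2.0 install path (remember that the JDK and the /etc/profile variables added above also need to be present on every node):

scp -r /usr/local/hadoop-3.2.0 root@slave1:/usr/local/
scp -r /usr/local/hadoop-3.2.0 root@slave2:/usr/local/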

Hadoop is now installed; format the NameNode (on master only):

cd $HADOOP_HOME/bin
./hdfs namenode -format

Start Hadoop:

cd $HADOOP_HOME/sbin
./start-all.sh
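With the configuration above, a quick check that everything came up (note: if start-all.sh refuses to run the daemons as root, Hadoop 3 expects HDFS_NAMENODE_USER, HDFS_DATANODE_USER, HDFS_SECONDARYNAMENODE_USER, YARN_RESOURCEMANAGER_USER and YARN_NODEMANAGER_USER to be exported, e.g. in hadoop-env.sh):

# on master
jps   # expect NameNode, SecondaryNameNode, ResourceManager
# on slave1 / slave2
jps   # expect DataNode, NodeManager

The HDFS web UI should then answer at http://master:50070 and the YARN UI at http://master:8088, matching the ports configured above.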

Done.


Original article: https://www.cnblogs.com/jake-jin/p/11978376.html
