
How to add a secondarynamenode node to a Hadoop cluster


At the time, Hadoop itself had been installed successfully, but the secondarynamenode would not start.

After some digging, it turned out the configuration was at fault: the start/stop scripts were pointing at the wrong host list.

First, modify the shell scripts.

Path: /home/work/hadoop/bin

In the final line of start-dfs.sh, the --hosts argument changes from master to secondarynamenode. The changed line is sketched below, followed by the full script.
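For clarity, here is just the line that changes, as a sketch (the pre-change value is given as master in this post; a stock Hadoop 1.x install ships with a masters host file instead, so adjust to whatever your script actually contains):

# before (per this post)
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts master start secondarynamenode
# after
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode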

[work@master bin]$ cat start-dfs.sh
#!/usr/bin/env bash


# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.




# Start hadoop dfs daemons.
# Optionally upgrade or rollback dfs state.
# Run this on master node.


usage="Usage: start-dfs.sh [-upgrade|-rollback]"


bin=`dirname "$0"`
bin=`cd "$bin"; pwd`


if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi


# get arguments
if [ $# -ge 1 ]; then
nameStartOpt=$1
shift
case $nameStartOpt in
 (-upgrade)
  ;;
 (-rollback) 
  dataStartOpt=$nameStartOpt
  ;;
 (*)
 echo $usage
 exit 1
   ;;
esac
fi


# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode
[work@master bin]$ 


The stop script, stop-dfs.sh, needs the same edit in its final line: --hosts master becomes --hosts secondarynamenode. Again the changed line is sketched first, followed by the full script.
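(Same caveat as above about the stock masters file name.)

# before (per this post)
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts master stop secondarynamenode
# after
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode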

[work@master bin]$ cat stop-dfs.sh
#!/usr/bin/env bash


# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.




# Stop hadoop DFS daemons.  Run this on master node.


bin=`dirname "$0"`
bin=`cd "$bin"; pwd`


if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi


"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode


[work@master bin]$ 


Second, modify the configuration files.

Path: /home/work/hadoop/conf

Originally the slaves file also listed node1; after the change it contains only node2 and node3, so no datanode or tasktracker will be started on node1:

[work@master conf]$ cat slaves
node2
node3
[work@master conf]$ 

Next, add a new file named secondarynamenode in the same directory, containing just node1. This is the host list the modified --hosts argument refers to:

[work@master conf]$ cat secondarynamenode
node1
[work@master conf]$ 
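To see why this works, here is a rough sketch of what hadoop-daemons.sh does with the --hosts argument, under the assumption of a stock Hadoop 1.x layout (the real logic lives in slaves.sh and hadoop-daemon.sh): the name is resolved to a file under $HADOOP_CONF_DIR, and the daemon command is run over SSH on every host listed in it.

# illustration only: approximate behavior of
#   hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode
for host in $(sed 's/#.*//;/^$/d' /home/work/hadoop/conf/secondarynamenode); do
  ssh "$host" "/home/work/hadoop/bin/hadoop-daemon.sh --config /home/work/hadoop/conf start secondarynamenode" &
done
wait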

At this point, the secondarynamenode configuration is complete.
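For the change to take effect, restart the DFS daemons from the master (paths as used throughout this post; the MapReduce daemons, JobTracker and TaskTracker, are started separately by start-mapred.sh):

# run on master
cd /home/work/hadoop
bin/stop-dfs.sh
bin/start-dfs.sh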


Here is the state of each node after a successful restart:

The cluster has one master; node1 is the secondarynamenode, and node2 and node3 are datanodes.

On master:

[work@master conf]$ jps
13338 NameNode
13884 Jps
13554 JobTracker
[work@master conf]$ 


On node1:

[work@node1 ~]$ jps
9772 SecondaryNameNode
10071 Jps
[work@node1 ~]$ 


On node2:

[work@node2 ~]$ jps
22897 TaskTracker
22767 DataNode
23234 Jps
[work@node2 ~]$ 


On node3:

[work@node3 ~]$ jps
3457 TaskTracker
3327 DataNode
3806 Jps
[work@node3 ~]$ 
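As a final sanity check, you can confirm that the daemon on node1 is actually writing checkpoints by looking at its log. This is only a suggestion: the path below assumes the stock hadoop-<user>-secondarynamenode-<host>.log naming under the logs directory and may differ on your cluster.

# assumed default log location and name; adjust if HADOOP_LOG_DIR is customized
ssh node1 "tail -n 20 /home/work/hadoop/logs/hadoop-work-secondarynamenode-node1.log"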


Original article: http://blog.csdn.net/lzq123_1/article/details/41866461
