
Automating Hadoop Installation with Shell Scripts


1. Overview

1.1 Introduction

This article describes how to automate a Hadoop installation with shell scripts. For the manual installation steps, see:

http://www.cnblogs.com/13bear/articles/3700842.html

 

1.2 Environment

OS:  CentOS release 6.4 (Final)

Hadoop: Apache Hadoop v1.2.1

 

1.3 Script Download

http://pan.baidu.com/s/1eQHyfZk

 

2. Script Overview

2.1 Directory Listing

drwxr-xr-x. 2 root root 4096 Apr 25 17:43 conf                 // directory holding all configuration files
-rwxr-xr-x. 1 root root 7099 Apr 30 10:39 hadoopAll.sh         // install hadoop on master and slaves and configure it
-rwxr-xr-x. 1 root root 1714 Apr 30 10:39 hadoopStatusCheck.sh // check whether hadoop is currently running
-rwxr-xr-x. 1 root root 1880 Apr 30 10:39 hostsAll.sh          // add ip/hostname mappings to the hosts file on master and slaves
-rwxr-xr-x. 1 root root 1608 Apr 30 10:39 installAll.sh        // ties the individual install scripts together to deploy the whole hadoop environment
-rwxr-xr-x. 1 root root 2723 Apr 30 10:39 javaAll.sh           // configure the java environment on master and slaves
-rwxr-xr-x. 1 root root  958 Apr 30 10:39 pingAll.sh           // check network connectivity of the master and slave hosts
-rwxr-xr-x. 1 root root  622 Apr 30 10:39 ping.sh              // check network connectivity of a single host, given its ip address
-rwxr-xr-x. 1 root root 2263 Apr 30 10:39 sshAll.sh            // configure passwordless ssh login between the master and slave hosts
drwxr-xr-x. 2 root root 4096 Apr 30 10:45 tools                // directory holding the java and hadoop packages
-rwxr-xr-x. 1 root root 1431 Apr 30 10:39 unhadoopAll.sh       // uninstall hadoop from master and slaves
-rwxr-xr-x. 1 root root 1412 Apr 30 10:39 unhostsAll.sh        // remove the hadoop ip/hostname entries from the hosts file on master and slaves
-rwxr-xr-x. 1 root root 1438 Apr 30 10:39 uninstallAll.sh      // ties the individual uninstall scripts together to tear down the whole hadoop environment
-rwxr-xr-x. 1 root root 1302 Apr 30 10:39 unjavaAll.sh         // remove the java environment from master and slaves
-rwxr-xr-x. 1 root root 1575 Apr 30 10:39 useraddAll.sh        // add the hadoop account on master and slaves
-rwxr-xr-x. 1 root root 1345 Apr 30 10:39 userdelAll.sh        // delete the hadoop account from master and slaves

./conf:
total 40
-rw-r--r--. 1 root root  345 Apr 25 17:43 core-site.xml    // hadoop config file; updated here, then pushed out as a replacement
-rw-r--r--. 1 root root 1310 Apr 23 17:32 env.config       // environment variables for the install scripts; the core configuration file
-rw-r--r--. 1 root root  124 Apr 25 17:43 hadoop.env       // hadoop environment variables, appended to /etc/profile
-rw-r--r--. 1 root root   61 Apr 25 17:43 hadoop-env.sh    // hadoop config file; updated here, then pushed out as a replacement
-rw-r--r--. 1 root root   92 Apr 25 17:43 hdfs-site.xml    // hadoop config file; updated here, then pushed out as a replacement
-rw-r--r--. 1 root root  117 Apr 22 21:19 hosts            // hadoop hosts file, appended to /etc/hosts
-rw-r--r--. 1 root root  177 Apr 25 17:37 java.env         // java environment variables, appended to /etc/profile
-rw-r--r--. 1 root root  119 Apr 25 17:43 mapred-site.xml  // hadoop config file; updated here, then pushed out as a replacement
-rw-r--r--. 1 root root   14 Apr 22 21:19 masters          // hadoop config file naming the master; updated here, then pushed out as a replacement
-rw-r--r--. 1 root root   28 Apr 25 13:38 slaves           // hadoop config file naming the slaves; updated here, then pushed out as a replacement

./tools:
total 197320
-rw-r--r--. 1 root root  63851630 Apr 10 21:46 hadoop-1.2.1.tar.gz        // hadoop package
-rw-r--r--. 1 root root 138199690 Apr 10 21:46 jdk-7u51-linux-x64.tar.gz  // java package

 

3. Configuration Files

3.1 Main Configuration File: env.config

3.1.1 Introduction

This file defines every variable used by the installation scripts. Edit the relevant variables before the first run.

3.1.2 Contents

# Shell Script
export SHELL_LOC=`pwd`
export CONF_LOC=$SHELL_LOC/conf
export TOOLS_LOC=$SHELL_LOC/tools

# Hosts
export HOSTS=`cat $CONF_LOC/masters $CONF_LOC/slaves`
export MASTER=`cat $CONF_LOC/masters`
export SLAVES=`cat $CONF_LOC/slaves`
export IP_LISTS=`cat $CONF_LOC/masters $CONF_LOC/slaves`
export IP_COUNT=`cat $CONF_LOC/masters $CONF_LOC/slaves | wc -l`

# Users
export ROOT_USERPWD=Passw0rd
export ROOT_PROMPT=]#
export HADOOP_USERNAME=hadoop
export HADOOP_USERPWD=hadoop
export HADOOP_PROMPT=]$
export HADOOP_USERHOME=/home/$HADOOP_USERNAME

# Expect Command
export EXPECTCHK=`rpm -qa expect | wc -l`
export EXPECT_TIMEOUT=4
export EXPECT_TIMEOUT_BIG=600

# Java
export JAVA_PKGNAME=jdk-7u51-linux-x64.tar.gz
export JAVA_FOLDERNAME=jdk1.7.0_51    # must match the directory name the Java tarball extracts to
export JAVA_INSTLOC=/usr/java
export JAVA_HOME=$JAVA_INSTLOC/$JAVA_FOLDERNAME

# Hadoop
export HADOOP_PKGNAME=hadoop-1.2.1.tar.gz
export HADOOP_FOLDERNAME=hadoop-1.2.1    # must match the directory name the Hadoop tarball extracts to
export HADOOP_INSTLOC=/usr    # if you change this, update conf/hadoop.env accordingly
export HADOOP_HOME=$HADOOP_INSTLOC/$HADOOP_FOLDERNAME
export HADOOP_CONFLOC=$HADOOP_HOME/conf
export HADOOP_TMP_DIR=$HADOOP_HOME/tmp
export HADOOP_DFS_REPLICATION=1
env.config

3.1.3 Notes

  • Shell Script section: locations of the script and config directories; rarely needs changes
  • Hosts section: the IP addresses of all hosts that will run hadoop, read from the masters and slaves files; rarely needs changes
  • Users section: the root password the scripts use to log in to the hosts, the hadoop account name and password, and the shell prompt strings matched by expect
  • Expect Command section: variables needed by the expect tool, e.g. whether expect is installed and the expect timeouts
  • Java section: Java package name, install location, and related settings
  • Hadoop section: Hadoop package name, install location, and configuration values (a quick sanity check is sketched below)
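Before the first run it is worth sourcing the file and eyeballing the derived values. A minimal sanity-check sketch (run from the script directory, since SHELL_LOC is taken from `pwd`; all variables shown are defined in env.config):

source ./conf/env.config
echo "MASTER:      $MASTER"        # should list the master IP
echo "SLAVES:      $SLAVES"        # should list the slave IPs
echo "IP_COUNT:    $IP_COUNT"      # should equal the number of hosts
echo "JAVA_HOME:   $JAVA_HOME"
echo "HADOOP_HOME: $HADOOP_HOME"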

 

3.2 Java and Hadoop Environment Files: java.env and hadoop.env

3.2.1 Introduction

These two files hold the java and hadoop environment variables that get appended to /etc/profile on each host.

3.2.2 Contents

#set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_51
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
java.env
#set hadoop path
export HADOOP_HOME=/usr/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1
hadoop.env
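The install scripts append these files to /etc/profile with a delete-then-append pattern, so repeated runs do not pile up duplicate entries. A sketch of the pattern as javaAll.sh later runs it on each host (as root):

sed -i '/java/Id' /etc/profile    # drop existing lines mentioning "java" (case-insensitive)
cat ~/java.env >> /etc/profile    # append the fresh environment block
source /etc/profile               # apply to the current shell
java -version                     # verify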

 

3.3 Hosts File: hosts

3.3.1 Introduction

Used to update /etc/hosts on every host in the test environment. Edit its contents before use.

3.3.2 Contents

192.168.1.144    zcmaster.hadoop    zcmaster
192.168.1.145    zcslave1.hadoop    zcslave1
192.168.1.146    zcslave2.hadoop    zcslave2

3.3.3 Notes

  • Every hostname line must contain the keyword hadoop: later scripts match on this keyword to delete old entries from /etc/hosts before appending the new ones
  • The IP addresses in this file must match those in the masters and slaves files

 

3.4 Hadoop Host Files: masters and slaves

3.4.1 Introduction

Used while configuring hadoop to populate hadoop's own masters and slaves files. Edit their contents before use.

3.4.2 Contents

masters:

192.168.1.144

slaves:

192.168.1.145
192.168.1.146

 

3.5 Hadoop Runtime Configuration Files: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml

3.5.1 Introduction

Used while configuring hadoop to update hadoop's hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml. The installer rewrites these files automatically from the values in env.config, so they need no manual editing. Note that the XML fragments below have no opening <configuration> tag: the installer deletes the closing </configuration> line from the stock file on each host and appends the fragment to it.

3.5.2 Contents

hadoop-env.sh:

#set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_51

core-site.xml:

    <!-- temp folder properties -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop-1.2.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <!-- file system properties -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.1.144:9000</value>
    </property>
</configuration>
core-site.xml

hdfs-site.xml:

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
hdfs-site.xml

mapred-site.xml:

    <property>
        <name>mapred.job.tracker</name>
        <value>http://192.168.1.144:9001</value>
    </property>
</configuration>
mapred-site.xml

 

4. Installation Scripts

4.1 Main Installer: installAll.sh

4.1.1 Introduction

The environment setup is split into several parts, each with its own script; every script can also be run on its own (described later):

  • pingAll.sh, which tests connectivity to every host
  • useraddAll.sh, which adds the hadoop user on every host
  • sshAll.sh, which sets up passwordless access between the hosts
  • javaAll.sh, which configures the java environment on every host
  • hostsAll.sh, which configures the hosts file on every host
  • hadoopAll.sh, which installs hadoop on every host

installAll.sh simply runs these scripts in order to install the whole hadoop environment.

4.1.2 Steps

1) Source the environment variables from env.config.

2) Ping every host; if any host is unreachable, exit.

3) If every host responds, install the whole environment.

4) Print the start and end time of the installation.

4.1.3 Script

#!/bin/bash

# Purpose:    Install hadoop env on clean hosts.
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     Install succeed
# 1     Some hosts can't be reached

# Import env variable
source ./conf/env.config

# Record the start time
start_time=`date`

# Make sure all the hosts can be reached
./pingAll.sh
if [ $? -eq 1 ]; then
    echo "###############################################"
    echo -e "\033[31mSome nodes can't be reached! Please check them!\033[0m"
    echo "###############################################"
    exit 1
fi
# Start install
echo ""
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++ Start Install +++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""
./useraddAll.sh
./sshAll.sh
./javaAll.sh
./hostsAll.sh
./hadoopAll.sh
echo ""
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo "+++++++++++++++++++++++++++++++++++++++++++++++++ End Install ++++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Report the install start and end times
end_time=`date`
echo "###############################################"
echo "Start Time: $start_time"
echo "End Time:   $end_time"
echo "###############################################"
exit 0
installAll.sh

4.1.4 Notes

  •  On successful completion, $? = 0 (a usage sketch follows below)
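A typical invocation from the script directory (a sketch; the path is whatever directory you unpacked the scripts into):

./installAll.sh
echo "installAll.sh exited with status $?"    # 0 = success, 1 = some host unreachable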

 

4.2 Network Connectivity Test: pingAll.sh

4.2.1 Introduction

Before the automated hadoop install, the network connectivity of every host must be checked; pingAll.sh does this.

4.2.2 Steps

1) Ping each host named in the masters and slaves files in turn, and print whether each host is reachable.

2) If every host responds, $? = 0; if any host fails, $? = 1.

4.2.3 Script

#!/bin/bash

# Purpose:    Test whether the ips all can be pinged.
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     All succeed
# 1     Some failed

# Import env variable
source ./conf/env.config

echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Ping Hosts Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Some variables
SUCCEED_COUNT=0
FAILED_COUNT=0

# Ping all the ips
echo "###############################################"
echo "Pinging all the hosts of hadoop environment."
echo "###############################################"
for ip in $IP_LISTS
do
    ping -c 1 $ip  >/dev/null 2>&1
    # Check the result
    if [ $? -eq 0 ]; then
        echo "$ip CAN be reached."
        let SUCCEED_COUNT+=1
    else
        echo -e "\033[31m$ip CANNOT be reached.\033[0m"
        let FAILED_COUNT+=1
    fi
done

# Result
if [ $SUCCEED_COUNT -eq $IP_COUNT ]; then
    #echo "All succeed!"
    exit 0
else
    #echo "$FAILED_COUNT failed!"
    exit 1
fi
pingAll.sh

4.2.4 Notes

  • Pings each host to be installed in turn and prints the result (failures in red)
  • If every host responds, $? = 0; if any host fails, $? = 1

 

4.3 Hadoop Admin Account Creation: useraddAll.sh

4.3.1 Introduction

A system account for administering hadoop must be created ahead of time; useraddAll.sh does this.

4.3.2 Steps

1) Add the hadoop admin account on every host.

2) Before adding, delete any pre-existing hadoop admin account.

3) After adding the account, set its password.

4.3.3 Script

#!/bin/bash

# Purpose:    Add the hadoop admin user on every host
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     All succeed
# 1     Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Useradd Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
        echo "###############################################"
        echo "Please install the \"expect\" package first on all nodes to allow the script to run"
        echo "yum -y install expect"
        echo "###############################################"
        exit 98
fi

# Add hadoop user for every host
for host in $HOSTS
do
    echo "###############################################"
    echo "Adding hadoop user \"$HADOOP_USERNAME\" for host $host"
    echo "###############################################"
    expect -c "
        set timeout $EXPECT_TIMEOUT
        spawn ssh root@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
            send -- \"userdel -r $HADOOP_USERNAME; useradd -d $HADOOP_USERHOME $HADOOP_USERNAME\r\"
        expect \"$ROOT_PROMPT\"
            send \"passwd $HADOOP_USERNAME\r\"
        expect \"New password:\"
            send -- \"$HADOOP_USERPWD\r\"
        expect \"Retype new password:\"
            send -- \"$HADOOP_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
useraddAll.sh

4.3.4 Notes

  • The account name and password are specified ahead of time in env.config

 

4.4 Passwordless SSH Setup: sshAll.sh

4.4.1 Introduction

The hadoop admin account needs passwordless ssh logins between all masters and slaves; sshAll.sh sets this up.

4.4.2 Steps

1) Generate an RSA key pair for the hadoop account on every host with ssh-keygen.

2) Copy each host's RSA public key to every other host with ssh-copy-id.

3) The hosts can then ssh to one another without passwords (a manual equivalent is sketched below).
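For reference, this is the manual equivalent of what the script automates, run as the hadoop user on each host (the IPs are the example cluster from section 3.4; substitute your own):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # non-interactive key generation
for h in 192.168.1.144 192.168.1.145 192.168.1.146; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$h   # enter the hadoop password once per host
done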

4.4.3 Script

#!/bin/bash

# Purpose:    Configure passwordless ssh login between the hadoop hosts
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     All succeed
# 1     Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ SSH Configuration Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
        echo "###############################################"
        echo "Please install the \"expect\" package first on all nodes to allow the script to run"
        echo "yum -y install expect"
        echo "###############################################"
        exit 98
fi

# Generate RSA keys for every host
for host in $HOSTS
do
    echo "###############################################"
    echo "Generating RSA keys on host $host"
    echo "###############################################"
    expect -c "
        set timeout $EXPECT_TIMEOUT
        spawn ssh $HADOOP_USERNAME@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$HADOOP_USERPWD\r\"
        } \"password:\" {
            send -- \"$HADOOP_USERPWD\r\"
        }
        expect \"$HADOOP_PROMPT\"
            send \"rm -rf ~/.ssh && ssh-keygen -t rsa -P ''\r\"
        expect \".ssh/id_rsa\"
            send \"\r\"
        expect \"$HADOOP_PROMPT\"
    "
    echo ""
done

# Copy every host's RSA pub key to ALL hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Copying $host RSA pub key to ALL hosts"
    echo "###############################################"
    for loop in $HOSTS
    do
        echo "==========> Copying $host RSA pub key to $loop"
        expect -c "
            set timeout $EXPECT_TIMEOUT
            spawn ssh $HADOOP_USERNAME@$host
            expect \"yes/no\" {
                send \"yes\r\"
                expect \"password:\"
                send -- \"$HADOOP_USERPWD\r\"
            } \"password:\" {
                send -- \"$HADOOP_USERPWD\r\"
            }
            expect \"$HADOOP_PROMPT\"
            send \"ssh-copy-id -i ~/.ssh/id_rsa.pub $HADOOP_USERNAME@$loop\r\"
            expect \"yes/no\" {
                send \"yes\r\"
                expect \"password:\"
                send -- \"$HADOOP_USERPWD\r\"
            } \"password:\" {
                send -- \"$HADOOP_USERPWD\r\"
            }
            expect \"$HADOOP_PROMPT\"
        "
        echo ""
    done
done
sshAll.sh

4.4.4 Notes

  • This script is critical: if it fails, hadoop will not run properly. A quick verification is sketched below.
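A verification sketch (run as the hadoop user on any cluster host): each command should print the remote hostname without asking for a password; BatchMode makes ssh fail instead of prompting if key login is broken.

for h in 192.168.1.144 192.168.1.145 192.168.1.146; do
    ssh -o BatchMode=yes hadoop@$h hostname
done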

 

4.5 Java Setup: javaAll.sh

4.5.1 Introduction

Running hadoop requires a java environment on every host; javaAll.sh sets this up.

4.5.2 Steps

1) Copy the java package to each host.

2) Extract the package.

3) Configure the java runtime environment (any old java environment is removed first).

4.5.3 Script

#!/bin/bash

# Purpose:    Config java env
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Prepare works
# java.env update with variables in env.config
sed -i "s#JAVA_HOME=.*#JAVA_HOME=$JAVA_HOME#" $CONF_LOC/java.env

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Java Configuration Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Configure Java env for all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Configuring Java env on $host"
    echo "###############################################"
    echo "==========> 1.Copying Java package to $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn scp $TOOLS_LOC/$JAVA_PKGNAME root@$host:/usr
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
    "
    echo "==========> 2.Extracting Java file to $JAVA_INSTLOC on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf $JAVA_HOME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"mkdir $JAVA_INSTLOC 2>/dev/null \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"tar -xzf /usr/$JAVA_PKGNAME -C $JAVA_INSTLOC \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"chown -R root:root $JAVA_INSTLOC/$JAVA_FOLDERNAME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -f /usr/$JAVA_PKGNAME \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "==========> 3.Configuring Java env variables on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn scp $CONF_LOC/java.env root@$host:~/
        expect \"password:\"
        send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
    "
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/java/Id' /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/java.env >> /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -f ~/java.env \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"source /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"java -version \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
javaAll.sh

4.5.4 Notes

  • Hadoop needs java to run, so java must be configured first.

 

4.6 (Optional) Hosts File Setup: hostsAll.sh

4.6.1 Introduction

If the hadoop hosts address each other by hostname, the hosts file must be configured; hostsAll.sh does this. This setup addresses the hosts by IP, so the step can be skipped.

4.6.2 Steps

1) Copy the prepared hosts file to each host.

2) Delete any existing matching entries from /etc/hosts.

3) Append the hosts file contents to /etc/hosts.

4.6.3 Script

#!/bin/bash

# Purpose:    Add hadoop hosts to /etc/hosts
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Add hadoop hosts to /etc/hosts Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Add hadoop hosts to /etc/hosts for all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Adding hadoop hosts to /etc/hosts on $host"
    echo "###############################################"
    echo "==========> 1.Copying hosts file to $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn scp $CONF_LOC/hosts root@$host:~/
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
    "
    echo "==========> 2.Adding hosts file content to /etc/hosts on $host"
    # The keyword "hadoop" that the sed command deletes on must appear in the hosts file, or the delete will not match
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/hadoop/d' /etc/hosts \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/hosts >> /etc/hosts \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -f ~/hosts \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat /etc/hosts \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
hostsAll.sh

4.6.4 Notes

  • The keyword hadoop must appear in the hosts file, or the sed delete cannot match the stale entries (see the script).

 

4.7 Hadoop Installation and Configuration: hadoopAll.sh

4.7.1 Introduction

hadoopAll.sh installs hadoop on every host and configures it.

4.7.2 Steps

1) Update the hadoop configuration files from env.config and pack them into a tarball.

2) Copy the configuration tarball and the hadoop package to each host.

3) Extract the package.

4) Install and configure the hadoop runtime environment (any old hadoop environment is removed first).

4.7.3 Script

#!/bin/bash

# Purpose:    Config Hadoop env
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Prepare works
# 1. hadoop-env.sh update with variables in env.config
sed -i "s#JAVA_HOME=.*#JAVA_HOME=$JAVA_HOME#" $CONF_LOC/hadoop-env.sh

# 2. core-site.xml update with variables in env.config
#sed -i "/<name>hadoop.tmp.dir<\/name>/,/<\/value>/c \\
#        <name>hadoop.tmp.dir</name>\n\
#        <value>$HADOOP_TMP_DIR</value>" $CONF_LOC/core-site.xml
sed -i "/hadoop\.tmp\.dir/{n;s#<value>.*</value>#<value>$HADOOP_TMP_DIR</value>#}" $CONF_LOC/core-site.xml
#sed -i "s#hdfs://.*:9000#hdfs://$MASTER:9000#" $CONF_LOC/core-site.xml
sed -i "/fs\.default\.name/{n;s#<value>.*</value>#<value>hdfs://$MASTER:9000</value>#}" $CONF_LOC/core-site.xml

# 3. hdfs-site.xml update with variables in env.config
#sed -i "/<name>dfs.replication<\/name>/,/<\/value>/c \\
#        <name>dfs.replication</name>\n\
#        <value>$HADOOP_DFS_REPLICATION</value>" $CONF_LOC/hdfs-site.xml
sed -i "/dfs\.replication/{n;s#<value>.*</value>#<value>$HADOOP_DFS_REPLICATION</value>#}" $CONF_LOC/hdfs-site.xml

# 4. mapred-site.xml update with variables in env.config
#sed -i "s#http://.*:9001#http://$MASTER:9001#" $CONF_LOC/mapred-site.xml
sed -i "/mapred\.job\.tracker/{n;s#<value>.*</value>#<value>http://$MASTER:9001</value>#}" $CONF_LOC/mapred-site.xml

# 5. hadoop.env update with variables in env.config
sed -i "s#HADOOP_HOME=.*#HADOOP_HOME=$HADOOP_HOME#" $CONF_LOC/hadoop.env

# 6. tar all conf files - will scp to all hosts for hadoop configuration
tar -czf conf.tar.gz conf

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Hadoop Configuration Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Configure Hadoop env on all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Configuring Hadoop env on $host"
    echo "###############################################"
    echo "==========> 1.Copying Hadoop package to $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn scp $TOOLS_LOC/$HADOOP_PKGNAME root@$host:$HADOOP_INSTLOC
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
    "
    echo "==========> 2.Extracting Hadoop file to $HADOOP_INSTLOC on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf $HADOOP_HOME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"mkdir $HADOOP_INSTLOC 2>/dev/null \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"tar -xzf $HADOOP_INSTLOC/$HADOOP_PKGNAME -C $HADOOP_INSTLOC \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"chown -R $HADOOP_USERNAME:$HADOOP_USERNAME $HADOOP_HOME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -f $HADOOP_INSTLOC/$HADOOP_PKGNAME \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "==========> 3.Configuring Hadoop on $host"
    echo "=====> 3.0 Copying Hadoop config files conf.tar.gz to $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn scp conf.tar.gz root@$host:~/
        expect \"password:\"
        send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
    "
    echo "=====> 3.1 Configuring Hadoop env in /etc/profile on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"tar -xzf conf.tar.gz \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/hadoop/Id' /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/hadoop.env >> /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"source /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"hadoop version \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.2 Configuring hadoop-env.sh file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/hadoop-env.sh >> $HADOOP_CONFLOC/hadoop-env.sh \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"tail -2 $HADOOP_CONFLOC/hadoop-env.sh \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.3 Configuring core-site.xml file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/<\\\/configuration>/d' $HADOOP_CONFLOC/core-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/core-site.xml >> $HADOOP_CONFLOC/core-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat $HADOOP_CONFLOC/core-site.xml | grep -v '^\\\s*\$' \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.4 Configuring hdfs-site.xml file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/<\\\/configuration>/d' $HADOOP_CONFLOC/hdfs-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/hdfs-site.xml >> $HADOOP_CONFLOC/hdfs-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat $HADOOP_CONFLOC/hdfs-site.xml | grep -v '^\\\s*\$' \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.5 Configuring mapred-site.xml file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/<\\\/configuration>/d' $HADOOP_CONFLOC/mapred-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/mapred-site.xml >> $HADOOP_CONFLOC/mapred-site.xml \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat $HADOOP_CONFLOC/mapred-site.xml | grep -v '^\\\s*\$' \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.6 Configuring masters file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/masters > $HADOOP_CONFLOC/masters \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat $HADOOP_CONFLOC/masters \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
    echo "=====> 3.7 Configuring slaves file on $host"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat ~/conf/slaves > $HADOOP_CONFLOC/slaves \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"cat $HADOOP_CONFLOC/slaves \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf conf; rm -f conf.tar.gz \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done

# Remove conf tar file in local
rm -f conf.tar.gz
hadoopAll.sh

4.7.4 Notes

  • Each step is commented in the script. The sed idiom used in the prepare step is explained below.
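The one non-obvious piece is the sed idiom in the prepare step: match the line containing the property <name>, advance to the following line with n, then substitute that line's <value>; # serves as the s-command delimiter so values containing / need no escaping. In isolation, with the example master from section 3.4:

sed -i "/fs\.default\.name/{n;s#<value>.*</value>#<value>hdfs://192.168.1.144:9000</value>#}" conf/core-site.xml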

 

5. Uninstall Scripts

5.1 Main Uninstaller: uninstallAll.sh

5.1.1 Introduction

The teardown of the environment is likewise split into several parts, each with its own script; every script can also be run on its own (described later):

  • pingAll.sh, which tests connectivity to every host
  • unhadoopAll.sh, which uninstalls hadoop from every host
  • unhostsAll.sh, which removes the hadoop entries from every host's hosts file
  • unjavaAll.sh, which removes the java environment from every host
  • userdelAll.sh, which deletes the hadoop user from every host

uninstallAll.sh simply runs these scripts in order to remove the whole hadoop environment.

5.1.2 Steps

1) Source the environment variables from env.config.

2) Ping every host; if any host is unreachable, exit.

3) Check whether hadoop is running; if it is, exit.

4) If hadoop is not running, remove the whole environment.

5.1.3 Script

#!/bin/bash

# Purpose:    Uninstall Hadoop
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     Uninstall succeed
# 1     Some hosts can't be reached
# 2     Hadoop is running

# Import env variable
source ./conf/env.config

# Make sure all the hosts can be reached
./pingAll.sh
if [ $? -eq 1 ]; then
    echo "###############################################"
    echo -e "\033[31mSome nodes can't be reached! Please check them!\033[0m"
    echo "###############################################"
    exit 1
fi

# Check the hadoop running status
./hadoopStatusCheck.sh
if [ $? -eq 1 ]; then
    exit 2
fi

# Start uninstall
echo ""
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++ Start Uninstall +++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""
./unhadoopAll.sh
./unhostsAll.sh
./unjavaAll.sh
./userdelAll.sh
echo ""
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++ End Uninstall +++++++++++++++++++++++++++++++++++++++++++++++"
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""
exit 0
uninstallAll.sh

5.1.4 Notes

  •  On successful completion, $? = 0

 

5.2 Hadoop Running-State Check: hadoopStatusCheck.sh

5.2.1 Introduction

Before the automated uninstall, we must check whether hadoop is still running; hadoopStatusCheck.sh does this.

5.2.2 Steps

1) Log in to the master host named in the masters file and check whether hadoop is running there.

2) If hadoop is not running, $? = 0; if it is running, $? = 1.

5.2.3 Script

#!/bin/bash

# Purpose:    Check whether Hadoop is running on the master
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    Hadoop is down
# 1    Hadoop is running
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Hadoop Status Check +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Checking Hadoop Status on Master
hadoop_status=0
echo "###############################################"
echo "Hadoop Status Check on Master -> $MASTER"
echo "###############################################"
expect -c "
    set timeout $EXPECT_TIMEOUT_BIG
    spawn ssh root@$MASTER
    expect \"yes/no\" {
        send \"yes\r\"
        expect \"password:\"
        send -- \"$ROOT_USERPWD\r\"
    } \"password:\" {
        send -- \"$ROOT_USERPWD\r\"
    }
    expect \"$ROOT_PROMPT\"
        send -- \"jps\r\"
    expect \"NameNode\" {
        expect \"$ROOT_PROMPT\"
        exit 1
    } \"$ROOT_PROMPT\" {
        exit 0
    }
"
hadoop_status=$?
if [ $hadoop_status -eq 1 ]; then
    echo ""
    echo "###############################################"
    echo -e "\033[31mHadoop is Running!\033[0m"
    echo "###############################################"
    exit 1
elif [ $hadoop_status -eq 0 ]; then
    echo ""
    echo "###############################################"
    echo -e "\033[32mHadoop is Down!\033[0m"
    echo "###############################################"
    exit 0
fi
hadoopStatusCheck.sh

5.2.4 Notes

  • If hadoop is not running, $? = 0; if it is running, $? = 1 (see the note on expect exit codes below)
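The status travels through the exit code of expect itself: an exit inside the expect -c body becomes the exit status of the expect process, which the calling shell then reads from $?. A two-line illustration:

expect -c 'exit 1'
echo $?    # prints 1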

 

5.3 Other Uninstall Scripts: unhadoopAll.sh, unhostsAll.sh, unjavaAll.sh, userdelAll.sh

5.3.1 Introduction

The uninstall scripts are fairly simple and can each be used on their own.

5.3.2 Steps

1) Log in to each host in turn and remove the corresponding component.

5.3.3 Scripts

#!/bin/bash

# Purpose:    Uninstall Hadoop env
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Uninstall Hadoop Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Uninstall Hadoop on all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Uninstalling Hadoop env on $host"
    echo "###############################################"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf $HADOOP_HOME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/hadoop/Id' /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf /tmp/hadoop-*;rm -rf /tmp/hsperfdata_*;rm -rf /tmp/Jetty_0_0_0_0_* \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
unhadoopAll.sh
#!/bin/bash

# Purpose:    Remove hadoop hosts in /etc/hosts
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Remove hadoop hosts in /etc/hosts Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Remove hadoop hosts in /etc/hosts for all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Removing hadoop hosts in /etc/hosts on $host"
    echo "###############################################"
    # The keyword "hadoop" that the sed command deletes on must appear in the hosts file, or the delete will not match
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/hadoop/Id' /etc/hosts \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
unhostsAll.sh
#!/bin/bash

# Purpose:    Uninstall Java env
# Author:    13Bear
# Update:    2014-04-25 17:18

# Exit value:
# 0    All succeed
# 1    Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Uninstall Java Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
    echo "###############################################"
    echo "Please install the \"expect\" package first on all nodes to allow the script to run"
    echo "yum -y install expect"
    echo "###############################################"
    exit 98
fi

# Uninstall Java for all hosts
for host in $HOSTS
do
    echo "###############################################"
    echo "Uninstalling Java env on $host"
    echo "###############################################"
    expect -c "
        set timeout $EXPECT_TIMEOUT_BIG
        spawn ssh root@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
            send -- \"rm -rf $JAVA_HOME \r\"
        expect \"$ROOT_PROMPT\"
            send -- \"sed -i '/java/Id' /etc/profile \r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
unjavaAll.sh
#!/bin/bash

# Purpose:    Delete the hadoop admin user on every host
# Author:       13Bear
# Update:       2014-04-25 17:18

# Exit value:
# 0     All succeed
# 1     Some failed
# 98    expect not install
# 99    Usage format error

# Import env variable
source ./conf/env.config

# Start
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++ Userdel Process +++++++++++++++++++++++++++++++++++++++++++++++"
echo ""

# Check the expect tools installation
if [ $EXPECTCHK != 1 ]; then
        echo "###############################################"
        echo "Please install the \"expect\" package first on all nodes to allow the script to run"
        echo "yum -y install expect"
        echo "###############################################"
        exit 98
fi

# Delete hadoop user for every host
for host in $HOSTS
do
    echo "###############################################"
    echo "Deleting hadoop user \"$HADOOP_USERNAME\" for host $host"
    echo "###############################################"
    expect -c "
        set timeout $EXPECT_TIMEOUT
        spawn ssh root@$host
        expect \"yes/no\" {
            send \"yes\r\"
            expect \"password:\"
            send -- \"$ROOT_USERPWD\r\"
        } \"password:\" {
            send -- \"$ROOT_USERPWD\r\"
        }
        expect \"$ROOT_PROMPT\"
            send -- \"userdel -r $HADOOP_USERNAME\r\"
        expect \"$ROOT_PROMPT\"
    "
    echo ""
done
userdelAll.sh

 


Original post: http://www.cnblogs.com/13bear/p/3700902.html
