One-Click Start and Stop Scripts for a Hadoop Cluster

When bringing up a Hadoop cluster, you need to start both HDFS and YARN. You can do this with start-dfs.sh and start-yarn.sh, or by starting each daemon on each node one at a time, but that is tedious. So here is a script that starts HDFS and YARN in one shot.

Cluster startup script

#!/bin/bash
echo "*********************Starting cluster services****************************"
echo "*********************Starting namenode************************"

ssh admin@hadoop-senior01.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh start namenode'
echo "*********************Starting datanodes************************"
for i in admin@hadoop-senior01.atguigu.com admin@hadoop-senior02.atguigu.com admin@hadoop-senior03.atguigu.com
do
        ssh "$i" '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh start datanode'
done

echo "*********************Starting secondarynamenode************************"

ssh admin@hadoop-senior03.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh start secondarynamenode'

echo "*********************Starting resourcemanager************************"

ssh admin@hadoop-senior02.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/yarn-daemon.sh start resourcemanager'

echo "*********************Starting nodemanagers************************"

for i in admin@hadoop-senior01.atguigu.com admin@hadoop-senior02.atguigu.com admin@hadoop-senior03.atguigu.com
do
        ssh "$i" '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/yarn-daemon.sh start nodemanager'
done

echo "*********************Starting jobhistory server************************"

ssh admin@hadoop-senior01.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/mr-jobhistory-daemon.sh start historyserver'
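The startup script repeats the same "ssh to every host and run a daemon command" loop several times. That pattern can be factored into a helper; a minimal sketch using the hostnames and paths from the script above, with the command runner swappable (it defaults to echo here so the sketch can be dry-run without a real cluster; set RUNNER=ssh for actual use):

```shell
#!/bin/bash
# Pattern used by the scripts above: run the same daemon command on every host.
# RUNNER=echo gives a dry run; set RUNNER=ssh to actually reach the cluster.
RUNNER="${RUNNER:-echo}"
HADOOP_SBIN='/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin'
NODES="admin@hadoop-senior01.atguigu.com admin@hadoop-senior02.atguigu.com admin@hadoop-senior03.atguigu.com"

run_on_all() {
    # $1 is the daemon command, e.g. "hadoop-daemon.sh start datanode"
    for node in $NODES; do
        "$RUNNER" "$node" "$HADOOP_SBIN/$1"
    done
}

run_on_all "hadoop-daemon.sh start datanode"
```

A dry run prints the three commands that would be executed, which is a quick way to sanity-check hostnames and paths before running against the cluster.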

Shutdown script

#!/bin/bash
echo "*********************Stopping cluster services****************************"

echo "*********************Stopping jobhistory server************************"

ssh admin@hadoop-senior01.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/mr-jobhistory-daemon.sh stop historyserver'

echo "*********************Stopping resourcemanager************************"

ssh admin@hadoop-senior02.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/yarn-daemon.sh stop resourcemanager'

echo "*********************Stopping nodemanagers************************"

for i in admin@hadoop-senior01.atguigu.com admin@hadoop-senior02.atguigu.com admin@hadoop-senior03.atguigu.com
do
        ssh "$i" '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/yarn-daemon.sh stop nodemanager'
done

echo "*********************Stopping secondarynamenode************************"

ssh admin@hadoop-senior03.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh stop secondarynamenode'

echo "*********************Stopping namenode************************"

ssh admin@hadoop-senior01.atguigu.com '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh stop namenode'
echo "*********************Stopping datanodes************************"
for i in admin@hadoop-senior01.atguigu.com admin@hadoop-senior02.atguigu.com admin@hadoop-senior03.atguigu.com
do
        ssh "$i" '/opt/modules/hadoop-2.5.0-cdh5.3.6/sbin/hadoop-daemon.sh stop datanode'
done

Note that the shutdown order is the reverse of the startup order only in a broad sense, not a strict element-by-element reverse. When stopping HDFS and YARN, the namenode and resourcemanager should go down first. The startup order in the script above is:
namenode -> datanode -> secondarynamenode -> resourcemanager -> nodemanager -> jobhistory server
while the shutdown order is:
jobhistory server -> resourcemanager -> nodemanager -> secondarynamenode -> namenode -> datanode
The reason is that there may still be work in flight on the cluster. If the datanodes were stopped first, the namenode would detect the missing nodes and could enter safe mode, and subsequent writes to the cluster might then fail.
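Writing the two orders out as lists makes the asymmetry explicit: the stop list is not the exact reverse of the start list, since namenode is stopped before the datanodes. A small sketch:

```shell
#!/bin/bash
# Start and stop orders used by the scripts above, written out as lists.
START_ORDER=(namenode datanode secondarynamenode resourcemanager nodemanager historyserver)
STOP_ORDER=(historyserver resourcemanager nodemanager secondarynamenode namenode datanode)

echo "start: ${START_ORDER[*]}"
echo "stop:  ${STOP_ORDER[*]}"
```

A strict reverse of the start list would end with "datanode namenode"; the stop list deliberately ends with "namenode datanode" instead, for the safe-mode reason above.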


Copyright notice: this article is an original work by chen7588693, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.