Manually Installing a Hadoop 3.3.1 + YARN HA Cluster








Services

  1. Two NameNodes

    Active/standby (hot-standby) high availability

  2. ZooKeeper cluster

  3. ZKFailoverController to monitor NameNode state

  4. JournalNode for shared edit-log metadata

  5. NodeManager

  6. ResourceManager

  7. DataNode

  8. JobHistoryServer



Host/IP plan

Role placement below follows how the hosts are used in the rest of this guide:

Node01  192.168.7.11  NameNode, JournalNode, ZKFC, ResourceManager
Node02  192.168.7.12  NameNode, JournalNode, ZKFC, ResourceManager, JobHistoryServer
Node03  192.168.7.13  JournalNode, DataNode, NodeManager, ZooKeeper
Node04  192.168.7.14  DataNode, NodeManager, ZooKeeper
Node05  192.168.7.15  DataNode, NodeManager, ZooKeeper

Other components planned for the cluster (placement not covered in this article): HBase (Master and slave), HiveServer2, Hive, Spark, Flink.



Preparation

# Configure environment variables

export JAVA_HOME=/usr/java/default
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/opt/bigdata/hadoop/current
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=/opt/bigdata/hadoop/current/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

export HIVE_HOME=/opt/bigdata/hive/current
export PATH=$PATH:$HIVE_HOME/bin
export SPARK_HOME=/opt/bigdata/spark/current
export PATH=$PATH:$SPARK_HOME/bin


# Append the lines above to the end of /etc/profile, then push it to the other hosts
# (it does not matter if a host does not run every service)
for i in {2,3,4,5};do scp /etc/profile node0${i}:/etc/profile ;done
# Install the JDK

# Exchange SSH keys between all hosts
  Generate a key pair on node01:
  ssh-keygen -t rsa -b 4096 -C "bigdata ssh" -f /home/god/.ssh/id_rsa -q 
  Send the public key to node01 itself: ssh-copy-id node01

  # The key trick:
  # authorized_keys does not need to change
  cd ~/.ssh
  # Pre-populate known_hosts for node02-05 by reusing node01's host-key entry
  # (this assumes the hosts share the same SSH host keys, e.g. cloned VMs)
  private_host=`cat known_hosts |awk -F ' ' '{print $2,$NF}'`
  for i in {2..5};do echo "node0${i},192.168.7.1${i} ${private_host}" >> known_hosts ;done

  scp -Crp .ssh node02:~/
  scp -Crp .ssh node03:~/
  scp -Crp .ssh node04:~/
  scp -Crp .ssh node05:~/
  # Now all five hosts can ssh to each other without a password (see the quick check after the /etc/hosts step below)

# Configure hostname resolution on every host
echo "
192.168.7.11 node01
192.168.7.12 node02
192.168.7.13 node03
192.168.7.14 node04
192.168.7.15 node05
" >> /etc/hosts

# Grant the god user ownership of the data directories
chown -R god:god /opt/
chown -R god:god /data/



Starting ZooKeeper

tar xf zookeeper-3.4.6.tar.gz -C /opt/bigdata/

cd /opt/bigdata/zookeeper-3.4.6/conf
cat > zoo.cfg <<-EOF
tickTime=2000
initLimit=20
syncLimit=10
dataDir=/data/bigdata/zookeeper/
dataLogDir=/data/log/zookeeper/
clientPort=2181
quorumListenOnAllIPs=true
server.1=node03:2888:3888
server.2=node04:2888:3888
server.3=node05:2888:3888
EOF

# Distribute ZooKeeper to the other nodes
[god@node03 bigdata]$ scp -Crp zookeeper-3.4.6 node04:/opt/bigdata/
[god@node03 bigdata]$ scp -Crp zookeeper-3.4.6 node05:/opt/bigdata/
# Run on every ZooKeeper node
[root@node03 ~]$ mkdir -p /data/{bigdata,log}/zookeeper
[root@node03 ~]$ echo 1 > /data/bigdata/zookeeper/myid # myid must match server.N in zoo.cfg
[root@node04 ~]$ mkdir -p /data/{bigdata,log}/zookeeper
[root@node04 ~]$ echo 2 > /data/bigdata/zookeeper/myid # myid must match server.N in zoo.cfg
[root@node05 ~]$ mkdir -p /data/{bigdata,log}/zookeeper
[root@node05 ~]$ echo 3 > /data/bigdata/zookeeper/myid # myid must match server.N in zoo.cfg

cat >> /etc/profile <<-EOF
export ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.6
export PATH=\$PATH:\$ZOOKEEPER_HOME/bin
EOF

# Grant the god user ownership of the directories
chown -R god:god /opt/
chown -R god:god /data/

# Start ZooKeeper
	node03
		[god@node03 ~]$ zkServer.sh start
	node04
		[god@node04 ~]$ zkServer.sh start
	node05
		[god@node05 ~]$ zkServer.sh start
	Check status
		zkServer.sh status
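	# Optional: confirm the ensemble elected a leader, using the "stat" four-letter command
	# (assumes nc is installed on the host you run this from)
	for i in {3..5};do echo -n "node0${i}: "; echo stat | nc node0${i} 2181 | grep Mode ;done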



Hadoop configuration files

  • profile
cat >> /etc/profile <<-EOF

export HADOOP_HOME=/opt/bigdata/hadoop/current
export PATH=\$PATH:\$HADOOP_HOME/bin
EOF


# Create the symlink on every host; do not just copy an already-created link
ln -s /opt/bigdata/hadoop/hadoop-3.3.1 /opt/bigdata/hadoop/current

# Set JAVA_HOME explicitly; daemons launched over ssh cannot find it otherwise
	echo "export JAVA_HOME=/usr/java/default" >> /opt/bigdata/hadoop/current/etc/hadoop/hadoop-env.sh

  • core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/tmp/hadoop-${user.name}</value>
        <description>Default value</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
        <description>HDFS cluster (nameservice) name</description>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node03:2181,node04:2181,node05:2181</value>
        <description>ZooKeeper quorum nodes</description>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>60</value>
        <description>Trash retention: 60 minutes</description>
    </property>
</configuration>

  • hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>Replication factor</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/bigdata/hadoop/ha/dfs/namenode</value>
        <description>NameNode metadata directory</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/bigdata/hadoop/ha/dfs/data</value>
        <description>DataNode data directory</description>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
        <description>Nameservice name</description>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
        <description>NameNodes in this nameservice</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node01:8020</value>
        <description>NameNode nn1 RPC address</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node01:50070</value>
        <description>NameNode nn1 HTTP address</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node02:50070</value>
        <description>NameNode nn2 HTTP address</description>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
        <description>JournalNode quorum for the shared edits</description>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/bigdata/hadoop/ha/dfs/journalnode</value>
        <description>JournalNode edit-log storage directory</description>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>Proxy provider that clients use to find the active NameNode</description>
    </property>
    <!-- SSH fencing: the ZKFC/NameNode hosts need passwordless SSH to each other and to themselves -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/god/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic failover (ZKFC starts with the cluster)</description>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>supergroup</value>
    </property>

    <property>
        <name>dfs.hosts</name>
        <value>/opt/bigdata/hadoop/current/etc/hadoop/hosts</value>
        <description>Include file for decommissioning: lists the hosts allowed to connect</description>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/opt/bigdata/hadoop/current/etc/hadoop/hosts-exclude</value>
    </property>
</configuration>

  • mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node02:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node02:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/data/bigdata/jobhistory/mr_history_tmp</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/data/bigdata/jobhistory/mr-history_done</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/data/bigdata/hadoop-yarn/staging</value>
    </property>
</configuration>

  • yarn-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Shuffle service</description>
    </property>
    <!-- After installing Spark, enable both shuffle services: -->
    <!--<property>-->
    <!--    <name>yarn.nodemanager.aux-services</name>-->
    <!--    <value>mapreduce_shuffle,spark_shuffle</value>-->
    <!--</property>-->

    <!--<property>-->
    <!--    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>-->
    <!--    <value>org.apache.hadoop.mapred.ShuffleHandler</value>-->
    <!--</property>-->

    <!--<property>-->
    <!--    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>-->
    <!--    <value>org.apache.spark.network.yarn.YarnShuffleService</value>-->
    <!--</property>-->
    <!-- Enable ResourceManager automatic failover -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Relies on ZooKeeper -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node03:2181,node04:2181,node05:2181</value>
    </property>
    <!-- ResourceManager cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>meifute</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
        <description>ResourceManager instances</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node01</value>
        <description>ResourceManager rm1 host: node01</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node02</value>
        <description>ResourceManager rm2 host: node02</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>node01:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>node02:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>node01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>node02:8030</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>${hadoop.tmp.dir}/nm-local-dir</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>file:///data/log/nodemanager/userlogs</value>
    </property>

    <!--<property>-->
    <!--    <description>Classpath for typical applications</description>-->
    <!--    <name>yarn.application.classpath</name>-->
    <!--    <value>-->
    <!--        $HADOOP_CONF_DIR,-->
    <!--        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,-->
    <!--        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,-->
    <!--        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,-->
    <!--        $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,-->
    <!--        $HADOOP_HOME/share/hadoop/common/*, $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,-->
    <!--        $HADOOP_HOME/share/hadoop/hdfs/*, $HADOOP_HOME/share/hadoop/hdfs/lib/*,-->
    <!--        $HADOOP_HOME/share/hadoop/mapreduce/*, $HADOOP_HOME/share/hadoop/mapreduce/lib/*,-->
    <!--        $HADOOP_HOME/share/hadoop/yarn/*, $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,-->
    <!--        $HIVE_HOME/lib/*, $HIVE_HOME/lib_aux/*-->
    <!--    </value>-->
    <!--</property>-->
    <property>
        <name>yarn.application.classpath</name>
        <value>/opt/bigdata/hadoop/current/etc/hadoop:/opt/bigdata/hadoop/current/share/hadoop/common/lib/*:/opt/bigdata/hadoop/current/share/hadoop/common/*:/opt/bigdata/hadoop/current/share/hadoop/hdfs:/opt/bigdata/hadoop/current/share/hadoop/hdfs/lib/*:/opt/bigdata/hadoop/current/share/hadoop/hdfs/*:/opt/bigdata/hadoop/current/share/hadoop/mapreduce/*:/opt/bigdata/hadoop/current/share/hadoop/yarn:/opt/bigdata/hadoop/current/share/hadoop/yarn/lib/*:/opt/bigdata/hadoop/current/share/hadoop/yarn/*</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
  • workers
cat > workers <<-EOF
node03
node04
node05
EOF

  • hosts
cat > hosts <<-EOF
node03
node04
node05
EOF
  • hosts-exclude (empty for now)
touch hosts-exclude
  • Distribute the configuration
cd /opt/bigdata/hadoop/current/etc/hadoop
for i in {2..5};do scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hosts workers hosts-exclude node0${i}:`pwd` ;done
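# Optional: spot-check that the copies match (a simple checksum loop)
for i in {2..5};do ssh node0${i} "md5sum /opt/bigdata/hadoop/current/etc/hadoop/hdfs-site.xml" ;done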



Starting and maintaining the HA NameNode + YARN cluster



  • 1. Make sure the ZooKeeper cluster is running

# Make sure ZooKeeper is running
node03
	zkServer.sh start
node04
	zkServer.sh start
node05
	zkServer.sh start
Check status
	zkServer.sh status



  • 2. Start the JournalNode cluster

# The JournalNode cluster runs on node01, node02 and node03
# Start journalnode on each of the three nodes
hdfs --daemon start journalnode
hdfs --daemon stop journalnode  # to stop it
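
# Optional: jps on node01-03 should now list a JournalNode process
jps | grep JournalNode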


  • 3. Format and start the primary NameNode

# Run on the primary NameNode (node01), and only once
# The NameNode must be formatted before its first start
hdfs namenode -format -clusterId bigdataserver   # the cluster id can be any name

# Start the primary NameNode
hdfs --daemon start namenode
hdfs --daemon stop namenode  # if something went wrong and you need to re-initialize
rm -rf /data/*               # if something went wrong and you need to re-initialize
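
# Optional: jps should now list "NameNode", and the web UI configured above
# should be reachable at http://node01:50070
jps | grep NameNode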


  • 4. Sync metadata to the standby NameNode

# On node02: pull the formatted metadata from the primary before starting the standby NameNode
hdfs namenode -bootstrapStandby


  • 5. Start the standby NameNode

# On node02
hdfs --daemon start namenode


  • 6. Start the ZooKeeper FailoverController (zkfc)

# Format the HA state in ZooKeeper (creates the znodes used for failover)
hdfs zkfc -formatZK # run once, on the primary NameNode

# First, on the primary NameNode
hdfs --daemon start zkfc

# Then on the other NameNode
hdfs --daemon start zkfc
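
# Optional: confirm the HA state; one NameNode should report active, the other standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2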


  • 7. Start the DataNode service on the storage nodes

[god@node03 ~]$ hdfs --daemon start datanode
[god@node04 ~]$ hdfs --daemon start datanode
[god@node05 ~]$ hdfs --daemon start datanode
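
# Optional: verify that all three DataNodes registered with the NameNode
hdfs dfsadmin -report | grep -i "live datanodes"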


  • 8. Start the ResourceManager, NodeManager and history server services

[god@node01 hadoop]$ start-yarn.sh # covers the ResourceManager/NodeManager commands below, but not the history server


yarn --daemon start resourcemanager # on the first ResourceManager
yarn --daemon start resourcemanager # on the second ResourceManager

# Start on each NodeManager in turn
yarn --daemon start nodemanager
#################
# Start the history server
[god@node02 hadoop]$ mapred --daemon start historyserver
or
[god@node02 hadoop]$ mr-jobhistory-daemon.sh start historyserver
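
# Optional: confirm the NodeManagers registered and check the ResourceManager HA state
yarn node -list                    # should list node03-05
yarn rmadmin -getServiceState rm1  # one RM should be active, the other standby
yarn rmadmin -getServiceState rm2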


  • 9. Test NameNode high availability

Test: kill the active NameNode process
	[root@node01 logs]# jps
41467 JournalNode
42972 DFSZKFailoverController
43292 Jps
42269 NameNode

	kill -9 42269
	node02 should switch to active
	restart the NameNode:
		hadoop-daemon.sh start namenode
		or
			hdfs --daemon start namenode
			
Test: kill the zkfc process
	[root@node02 logs]# jps
16562 QuorumPeerMain
24835 NameNode
18628 DataNode
25096 Jps
17530 JournalNode
18746 DFSZKFailoverController
	kill -9 18746
	node01 should switch back to active
	restore zkfc:
		hadoop-daemon.sh start zkfc
		
Test: the active NameNode host goes down (network cut)
	[root@node01 logs]# ifconfig ens33 down
	The surviving ZKFC cannot reach the downed host over SSH to fence it, so with sshfence alone the standby does not take over.
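	A common workaround (not part of the original configuration; sketched here as an assumption) is to add a fallback fencing method in hdfs-site.xml, so that a fencing attempt against an unreachable host still counts as success and failover can proceed:

    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)</value>
    </property>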


  • 10. Verify that YARN is working

http://node01:8088/cluster/nodes

cat > wc <<EOF
hadoop spark
spark hadoop
oracle mysql postgresql
postgresql oracle mysql
mysql mongodb
hdfs yarn mapreduce
yarn hdfs
zookeeper
EOF
hdfs dfs -mkdir -p /data/wc/input/
hdfs dfs -put wc /data/wc/input

cd /opt/bigdata/hadoop/current/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-3.3.1.jar wordcount /data/wc/input /data/wc/output

View the result
hdfs dfs -text /data/wc/output/*
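
# Optional: a successful job leaves a _SUCCESS marker in the output directory;
# with the sample input above the result should contain counts such as "hadoop 2" and "mysql 3"
hdfs dfs -ls /data/wc/output/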



Service addresses and directories

HDFS root:               hdfs://mycluster
NameNode web UI:         http://node01:50070    http://node02:50070
JobHistory web UI:       http://node02:19888
ResourceManager web UI:  http://node01:8088/cluster
Cluster log directory:   /opt/bigdata/hadoop/current/logs/

Local data directory:    /data/bigdata/



Basic commands

# Show Hadoop's class path
hadoop classpath
# If MapReduce jobs fail with "Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster",
# the containers cannot see the Hadoop jars; in this setup that is handled by the
# yarn.application.classpath value in yarn-site.xml above (its value matches the output of `hadoop classpath`).
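
An alternative fix commonly used with Hadoop 3.x (not used in this setup; a sketch assuming the install path from this guide) is to tell the MapReduce ApplicationMaster and tasks where Hadoop lives via mapred-site.xml:

    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/bigdata/hadoop/current</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/bigdata/hadoop/current</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/bigdata/hadoop/current</value>
    </property>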



Copyright notice: this is an original article by qq_41878423, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.