2015-04-22 14:17:29,908 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.100:53310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2015-04-22 14:17:29,908 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-172857601-192.168.1.100-1429683180778 (storage id DS-1882029846-192.168.1.100-50010-1429520454466) service to master/192.168.1.100:53310
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:439)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
2015-04-22 14:17:34,910 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.100:53310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2015-04-22 14:17:34,910 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-172857601-192.168.1.100-1429683180778 (storage id DS-1882029846-192.168.1.100-50010-1429520454466) service to master/192.168.1.100:53310
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:439)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
The likely cause: I originally had four machines, each with six disks, and on one of them the disk directory names did not match the other three. When the dfs.data.dir directories were being created, the creation on that machine apparently failed, and I then ran all sorts of format operations on top of it, which presumably left the DataNode's storage metadata out of sync with the NameNode.
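If the mismatch in dfs.data.dir really is the trigger, the directories can be created uniformly on every DataNode before any formatting. A minimal sketch, assuming six disks mounted at /data1 through /data6 and a hadoop:hadoop service user (both are placeholders, not from the original setup):

##Run on every DataNode; mount points and owner are assumptions
for i in 1 2 3 4 5 6; do
  mkdir -p /data${i}/dfs/data
  chown -R hadoop:hadoop /data${i}/dfs/data
done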
Solution
1. Stop the whole cluster first.
2. Delete the directories configured as hadoop.tmp.dir, dfs.name.dir, dfs.journalnode.edits.dir, and so on.
3. Delete the dfs.data.dir directories on every DataNode (see the cleanup sketch after this list).
4. Re-run the steps below.
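On each node, steps 1-3 might look like the following sketch; the stop script and every path here are placeholders for whatever your core-site.xml and hdfs-site.xml actually configure:

##Stop HDFS first (stop YARN/MapReduce too if they are running)
stop-dfs.sh
##Remove the old state; substitute your configured hadoop.tmp.dir, dfs.name.dir,
##dfs.journalnode.edits.dir and dfs.data.dir paths
rm -rf /data/hadoop/tmp /data/hadoop/dfs/name /data/hadoop/journal
rm -rf /data[1-6]/dfs/data

With the old state gone, rebuild the cluster: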
##Start the zookeeper service on each node
zkServer.sh start
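##(Sanity check, standard ZooKeeper tooling) each server should report follower or leader
zkServer.sh status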
##On one of the namenode nodes, run the following command to create the HA namespace in ZooKeeper
hdfs zkfc -formatZK
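##(Optional) verify the namespace was created; /hadoop-ha is ZooKeeper's default HA parent znode
zkCli.sh -server localhost:2181 ls /hadoop-ha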
##Start the journal daemon on each node with the following command
hadoop-daemon.sh start journalnode
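##(Optional) confirm the JournalNode JVM came up on each node
jps | grep JournalNode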
##On the primary namenode node, format the namenode and journalnode directories
hadoop namenode -format mycluster
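##Note: on Hadoop 2.x "hdfs namenode -format" is the preferred spelling; the nameservice
##name ("mycluster") normally comes from dfs.nameservices in hdfs-site.xml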
##Start the namenode process on the primary namenode node
hadoop-daemon.sh start namenode
##The following command formats the standby namenode's directories and copies the metadata over from the primary (run it on the standby node)
hdfs namenode -bootstrapStandby
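##(Optional) the standby's dfs.name.dir should now hold a current/ directory with the
##fsimage and VERSION copied from the primary; the path below is a placeholder
ls /data/hadoop/dfs/name/current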
##Start the standby node's namenode
hadoop-daemon.sh start namenode
##Start the zkfc service on both namenode nodes
hadoop-daemon.sh start zkfc
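##(Optional) check HA state; nn1/nn2 stand for whatever IDs dfs.ha.namenodes.mycluster defines
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2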
##Start the datanode on all datanode nodes
hadoop-daemon.sh start datanode
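Once everything is up, it is worth confirming that the DataNodes are actually heartbeating again; both commands below are standard HDFS tooling:

##Every DataNode should show up as live, with capacity from all of its disks
hdfs dfsadmin -report
##The DataNode log should now show block pool registration instead of the NullPointerException
##(default log location shown; yours may differ)
tail -f $HADOOP_HOME/logs/hadoop-*-datanode-*.log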