Hadoop 3.0 + CentOS 7.4 single-node installation tutorial



Reference tutorial

http://blog.csdn.net/cafebar123/article/details/73500014

Related downloads

Hive download:

http://mirrors.hust.edu.cn/apache/hive/stable-2/apache-hive-2.3.2-bin.tar.gz

Hadoop download:

https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz

1. Hive install directory: /opt/hive/apache-hive-2.3.2-bin

2. Hadoop install directory: /opt/hadoop/hadoop-3.0.0

3. JDK install directory: /usr/java/jdk1.8.0_65

4. Environment variable configuration (append to /etc/profile)

export  JAVA_HOME=/usr/java/jdk1.8.0_65

export  HADOOP_HOME=/opt/hadoop/hadoop-3.0.0

export  HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export  HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export  HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"

export  HIVE_HOME=/opt/hive/apache-hive-2.3.2-bin

export  HIVE_CONF_DIR=${HIVE_HOME}/conf

export  CLASSPATH=.:${JAVA_HOME}/lib:${HIVE_HOME}/lib:$CLASSPATH

export  PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HIVE_HOME}/bin:$PATH

5. Apply the environment variables

source  /etc/profile

6. vim /opt/hadoop/hadoop-3.0.0/etc/hadoop/core-site.xml and modify:

<configuration>

<!-- RPC address of the HDFS master (NameNode) -->

<property>

<name>fs.defaultFS</name>

<value>hdfs://localhost:9000</value>

</property>

<!-- Storage path for files Hadoop generates at runtime -->

<property>

<name>hadoop.tmp.dir</name>

<value>/opt/hadoop/tmp</value>

</property>

</configuration>

7. vim /opt/hadoop/hadoop-3.0.0/etc/hadoop/hdfs-site.xml and add the following:

<configuration>

<property>

<name>dfs.namenode.name.dir</name>

<value>/opt/hadoop/hdfs/name</value>

<description>Where the NameNode stores HDFS namespace metadata (dfs.name.dir is the deprecated pre-2.x name)</description>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>/opt/hadoop/hdfs/data</value>

<description>Physical location of block storage on the DataNode (dfs.data.dir is the deprecated pre-2.x name)</description>

</property>

<!-- HDFS replication factor -->

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

</configuration>
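The directories referenced in core-site.xml and hdfs-site.xml should exist and be writable by the user running HDFS before the NameNode is formatted. A minimal sketch, using the same paths as this tutorial:

```shell
# Create hadoop.tmp.dir plus the NameNode and DataNode directories
# configured above (paths match this tutorial's layout).
mkdir -p /opt/hadoop/tmp /opt/hadoop/hdfs/name /opt/hadoop/hdfs/data
ls -ld /opt/hadoop/tmp /opt/hadoop/hdfs/name /opt/hadoop/hdfs/data
```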

8. Set up passwordless SSH login

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

(RSA is used here instead of the original DSA: the OpenSSH build shipped with CentOS 7 disables DSA public keys by default. Note that -P '' is an empty passphrase in straight single quotes.)

chmod 0600 ~/.ssh/authorized_keys
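If key-based login still prompts for a password after the steps above, the usual cause is directory permissions: sshd silently ignores authorized_keys when ~/.ssh is too permissive. A quick sketch to tighten them:

```shell
# sshd refuses public-key auth if ~/.ssh is group/world-writable
# or authorized_keys is readable by others; tighten both.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Afterwards, `ssh localhost` should log in without a password prompt.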

9. Start and stop commands

9.1 Initialize (format the NameNode)

cd /opt/hadoop/hadoop-3.0.0

./bin/hdfs namenode -format

9.2 Start

./sbin/start-dfs.sh

9.3 Stop

./sbin/stop-dfs.sh

Troubleshooting

Starting namenodes on [localhost]

ERROR: Attempting to operate on hdfs namenode as root

ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

Starting datanodes

ERROR: Attempting to operate on hdfs datanode as root

ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.

Starting secondary namenodes [bogon]

ERROR: Attempting to operate on hdfs secondarynamenode as root

ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

Fix 1

$ vim sbin/start-dfs.sh

$ vim sbin/stop-dfs.sh

Add the following to both files:

HDFS_DATANODE_USER=root

HADOOP_SECURE_DN_USER=hdfs

HDFS_NAMENODE_USER=root

HDFS_SECONDARYNAMENODE_USER=root

Fix 2

$ vim sbin/start-yarn.sh

$ vim sbin/stop-yarn.sh

Add the following to both files:

YARN_RESOURCEMANAGER_USER=root

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=root
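An alternative to patching the four start/stop scripts, assuming the same run-everything-as-root setup as this tutorial: declare the variables once in ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh, which Hadoop 3 sources on every daemon start, so upgrades that overwrite the scripts do not undo the fix.

```shell
# Equivalent settings for etc/hadoop/hadoop-env.sh; Hadoop 3 reads
# this file at daemon startup, so the scripts themselves stay untouched.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```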

10. Verify the installation

Open the NameNode web UI in a browser (in Hadoop 3 it moved from port 50070 to 9870; replace 192.168.50.48 with your host's IP):

http://192.168.50.48:9870/dfshealth.html#tab-overview



Copyright notice: this is an original article by cjh365047871, released under the CC 4.0 BY-SA license; when reposting, please include a link to the original and this notice.