Integrating Spark2 with Oozie



Project Background

When our department was first set up, all jobs were written as shell scripts and run on a schedule. Since the company runs CDH, which ships with Oozie, and Oozie is simple to operate, powerful, and has a decent graphical interface, I wanted to move the jobs onto it. In the process I ran into a pile of problems: the integration of Spark2 with Oozie is still not very mature and there is very little material online, so I basically had to wade through it myself...

Environment

CDH: CDH-5.13.2-1.cdh5.13.2.
Java: 1.8
Scala: 2.11.8
Spark: 2.2.0

Integration Steps

Since Spark and Oozie were already installed on our cluster, I won't cover the CDH installation steps here; you can refer to my earlier article for that, though the version there is a bit dated.

1. First, go to the Oozie sharelib directory on HDFS (here /user/oozie/share/lib/lib_20180403101432/; the timestamp suffix will differ on your cluster). By default it contains a spark directory but no spark2 directory, so at runtime Oozie picks up the jars under the spark directory. This is why we need to create a spark2 directory.

2. Run the following commands:

  • hadoop fs -mkdir /user/oozie/share/lib/lib_20180403101432/spark2
  • hadoop fs -put /opt/cloudera/parcels/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904/lib/spark2/jars/* /user/oozie/share/lib/lib_20180403101432/spark2
  • hadoop fs -cp /user/oozie/share/lib/lib_20180403101432/spark/oozie-sharelib-spark-4.1.0-cdh5.7.0.jar /user/oozie/share/lib/lib_20180403101432/spark2
  • hadoop fs -cp /user/oozie/share/lib/lib_20180403101432/spark/oozie-sharelib-spark.jar /user/oozie/share/lib/lib_20180403101432/spark2

The resulting directory structure:

[screenshot: sharelib directory structure]
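
After the jars are in place, the Oozie server needs to reload its sharelib cache before spark2 becomes visible. Either restart Oozie from Cloudera Manager, or (assuming the Oozie server URL that appears later in job.properties) refresh and verify it with the admin CLI:

  • oozie admin -oozie http://192.168.115.166:11000/oozie -sharelibupdate
  • oozie admin -oozie http://192.168.115.166:11000/oozie -shareliblist spark2

The second command should print the jars you just copied.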

Configuring the Workflow in the Hue UI

1. In the Hue 4 interface, first click:

[screenshot: Hue 4 interface]


Save the document you have created:


[screenshot: saving the document]


Then open Hue's workflow editor. Note that here you should select the document you just created by name. Click Add, then click the gear icon to configure the action, as shown:


[screenshots: Spark action configuration in the workflow editor]

Finally, save and submit.

Submitting via Configuration Files

The directory structure is as follows:

[screenshot: HDFS directory layout of the job]

The contents of workflow.xml:

<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="uri:oozie:workflow:0.4" xsi:schemaLocation="uri:oozie:workflow:0.4 uri:oozie:workflow:0.4"
    name="oozie_demo">
    <start to="User_time" />
    <action name="User_time">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>oozie.action.sharelib.for.spark</name>
                    <value>spark2</value>
                </property>
            </configuration>
            <master>yarn-cluster</master>
            <name>User_time_Test</name>
            <class>com.alading.bigdata.use_habit.test</class>
            <jar>${nameNode}/${examplesRoot}/lib/cc.jar</jar>
            <spark-opts>--deploy-mode cluster --driver-memory 2G --executor-memory 4G --num-executors 5 --executor-cores 2</spark-opts>
        </spark>
        <ok to="end" />
        <error to="fail_kill" />
    </action>
    <kill name="fail_kill">
        <message>Job failed, error
            message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end" />
</workflow-app>
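
Two things worth noting in this definition: oozie.action.sharelib.for.spark is set inside the action's <configuration>, so it applies only to this action; and since <master>yarn-cluster</master> already implies cluster mode, the --deploy-mode cluster in spark-opts is redundant, though harmless.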

The contents of job.properties:

nameNode=hdfs://hdp7:8020
jobTracker=hdp5:8032
queueName=default
examplesRoot=/user/zyw/oozie-oozi/user_habit
oozie.wf.application.path=${examplesRoot}
workflowpath=${examplesRoot}
userName=zyw
groupsName=supergroup
oozie.use.system.libpath=true
#jars in hdfs
oozie.libpath=/user/zyw/oozie-oozi/result
oozie.subworkflow.classpath.inheritance=true
#clean_file_num=12
#check_sleep_time=2
start=2016-11-11T02:00+0800
end=2099-07-31T23:00+0800
EXEC=spark_demo.sh
#oozie url
oozieUrl=http://192.168.115.166:11000/oozie
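
With both files in place, the workflow can also be submitted from the command line instead of Hue, using the oozieUrl defined above:

oozie job -oozie http://192.168.115.166:11000/oozie -config job.properties -run

Note that job.properties itself is read locally by the CLI; only workflow.xml and the jars need to live in HDFS.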

Pitfalls I Waded Through

Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

My fix was to create on the Hadoop nodes the same account as the Hue user, and then run the following commands:

useradd zyw

groupadd supergroup

usermod -a -G supergroup root

usermod -a -G supergroup mapred

usermod -a -G supergroup hdfs

usermod -a -G supergroup hive

usermod -a -G supergroup hue

usermod -a -G supergroup spark

usermod -a -G supergroup zyw

sudo -u hdfs hadoop fs -chmod 777 /user
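
A note on the fix above: by default HDFS resolves group membership on the NameNode host, so the useradd/usermod commands need to run there. You can check that the mapping took effect with:

hdfs groups zyw

which should list supergroup for the user.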

Reference for the above:

https://blog.csdn.net/wendingzhulu/article/details/53571529


Exception in thread "main" org.apache.spark.SparkException: Application application_1523881340434_0567 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1106)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1152)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

This error turned out to be caused by a .master("local") call in the code that had not been commented out. Check whether it is commented out: when submitting through Oozie, the master is already set by the <master> element in the workflow.
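
For reference, a minimal sketch of what the entry point should look like (the object name and body below are illustrative, not the actual project code):

import org.apache.spark.sql.SparkSession

object UserTimeJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      // .master("local[*]") // only for local IDE debugging; must stay commented
      //                     // out when submitting through Oozie, where the
      //                     // workflow already sets <master>yarn-cluster</master>
      .appName("User_time_Test")
      .enableHiveSupport() // the job reads Hive tables, per the stack traces here
      .getOrCreate()

    // ... job logic ...

    spark.stop()
  }
}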

---

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, Application application_1523881340434_0430 finished with failed status
org.apache.spark.SparkException: Application application_1523881340434_0430 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1106)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1152)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:178)
    at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:90)
    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:81)
    at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:57)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:235)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

For this error I also suspected a jar conflict or a missing jar, but the failing classes were from the Scala standard library, so I switched my Scala version to match the cluster's. Still no luck. Finally I asked an expert, who told me to diff the classpath in the logs against the jars of a working environment. Seriously, look at this and tell me that's something a human can do:

[screenshot: an excerpt of the classpath from the logs]

And that's only part of it. Luckily I'm lazy: I looked at my stdout log instead and noticed that every jar was being loaded from the spark directory, which meant my setting oozie.action.sharelib.for.spark=spark2 was not taking effect. After fixing it, the job went through.
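
If you hit the same symptom, the launcher job's stdout is the quickest place to confirm: the jars being added to the classpath should resolve from .../share/lib/lib_<timestamp>/spark2 rather than .../spark. It is also worth double-checking the property value for typos or stray whitespace, and comparing it against what the Hue action configuration actually submitted.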

Yet Another Pit

After the production cluster was upgraded, a pile of problems appeared again and I had to set everything up from scratch. After a system upgrade, a new directory like the one below is created, and the spark2 sharelib has to be rebuilt under the newest directory.

[screenshot: the new lib_<timestamp> directory created after the upgrade]

Then the configuration failed again, and I struggled with it for a long time. It finally turned out that, in order to create the directories, I had switched the Hue user and never switched back. I really wanted to slap myself. Here is the error:

ERROR org.apache.spark.deploy.yarn.ApplicationMaster - User class threw exception: java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME
java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME
    at org.apache.spark.sql.hive.HiveUtils$.hiveClientConfigurations(HiveUtils.scala:190)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:265)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
    at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:938)
    at com.alading.bigdata.use_habit.Use_time$.main(Use_time.scala:16)
    at com.alading.bigdata.use_habit.Use_time.main(Use_time.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:645)


When this happens, check whether the Hue user is correct:

[screenshot: the current Hue user]


On the 152nd run, it finally succeeded...

Don't listen to the rain beating through the woods; why not chant and whistle, and stroll on at ease.

A bamboo staff and straw sandals beat a horse. Who's afraid? In a straw cloak I'd spend my life in mist and rain.

The chill spring wind sobers me from the wine; slightly cold, yet the slanting sun on the hilltop comes to greet me.

Looking back at the bleak place I came through: going home, there is neither wind and rain nor fair skies.

(Su Shi, "Ding Feng Bo")



Copyright notice: this is an original article by qq_24908345, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.