While testing spark-sql running on YARN today, I happened to notice something in the logs:
14/12/29 15:23:17 INFO Client: Requesting a new application from cluster with 1 NodeManagers
14/12/29 15:23:17 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
14/12/29 15:23:17 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
14/12/29 15:23:17 INFO Client: Setting up container launch context for our AM
14/12/29 15:23:17 INFO Client: Preparing resources for our AM container
14/12/29 15:23:17 INFO Client: Uploading resource file:/home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar -> hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0093/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
14/12/29 15:23:18 INFO Client: Setting up the launch environment for our AM container
I then started a second spark-sql shell and saw the same thing in its logs:
14/12/29 15:24:03 INFO Client: Requesting a new application from cluster with 1 NodeManagers
14/12/29 15:24:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
14/12/29 15:24:03 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
14/12/29 15:24:03 INFO Client: Setting up container launch context for our AM
14/12/29 15:24:03 INFO Client: Preparing resources for our AM container
14/12/29 15:24:03 INFO Client: Uploading resource file:/home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar -> hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0094/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
14/12/29 15:24:05 INFO Client: Setting up the launch environment for our AM container
Then check the files on HDFS:
hadoop fs -ls hdfs://hadoop000:8020/user/spark/.sparkStaging/
drwx------   - spark supergroup          0 2014-12-29 15:23 hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0093
drwx------   - spark supergroup          0 2014-12-29 15:24 hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0094
Every application uploads its own copy of spark-assembly-x.x.x-SNAPSHOT-hadoopx.x.x-cdhx.x.x.jar, which hurts HDFS performance and wastes HDFS space.
The Spark documentation (http://spark.apache.org/docs/latest/running-on-yarn.html) describes the spark.yarn.jar property, so I put spark-assembly-xxxxx.jar under hdfs://hadoop000:8020/spark_lib/ (see the sketch below).
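A minimal sketch of that upload, assuming the local jar path shown in the logs above and that /spark_lib does not exist yet:

# create the shared directory on HDFS and upload the assembly jar once
hadoop fs -mkdir -p hdfs://hadoop000:8020/spark_lib
hadoop fs -put /home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar hdfs://hadoop000:8020/spark_lib/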
Add the property to spark-defaults.conf:
spark.yarn.jar hdfs://hadoop000:8020/spark_lib/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
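The spark-defaults.conf entry applies to every application; if you only want to try it for a single session, the same property should also be settable per invocation with --conf (a hedged example, not from the original post):

spark-sql --master yarn --conf spark.yarn.jar=hdfs://hadoop000:8020/spark_lib/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar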
Start spark-sql --master yarn again and watch the logs:
14/12/29 15:39:02 INFO Client: Requesting a new application from cluster with 1 NodeManagers
14/12/29 15:39:02 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
14/12/29 15:39:02 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
14/12/29 15:39:02 INFO Client: Setting up container launch context for our AM
14/12/29 15:39:02 INFO Client: Preparing resources for our AM container
14/12/29 15:39:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop000:8020/spark_lib/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
14/12/29 15:39:02 INFO Client: Setting up the launch environment for our AM container
Check the files on HDFS:
hadoop fs -ls hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0097
There is no spark-assembly-xxxxx.jar under that application's staging directory anymore, so the assembly upload step and the HDFS space it would occupy are both saved.
During testing I also ran into an error like the following:
Application application_xxxxxxxxx_yyyy failed 2 times due to AM Container for application_xxxxxxxxx_yyyy
exited with exitCode: -1000 due to: java.io.FileNotFoundException: File /tmp/hadoop-spark/nm-local-dir/filecache does not exist
Creating a filecache directory under /tmp/hadoop-spark/nm-local-dir fixes the error, for example as sketched below.
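A minimal sketch of the fix, assuming the NodeManager local dir is /tmp/hadoop-spark/nm-local-dir as in the error above (run on the NodeManager host, as the user the NodeManager runs as):

# create the missing local filecache directory on the NodeManager node
mkdir -p /tmp/hadoop-spark/nm-local-dir/filecache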