Launching Spark Jobs with SparkLauncher

A Spark job is normally launched with the spark-submit command, but sometimes we need to launch one or more child Spark jobs from within a running Spark job.
There are generally two ways to do this:
Option 1: call the spark-submit shell command from Scala or Java to start a new job (a minimal sketch follows below)
Option 2: use Spark's SparkLauncher class to start a new job
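For context, option 1 can be as simple as shelling out from the parent JVM. A minimal sketch, assuming spark-submit is on the PATH of the machine running this code; the jar path and class name here are placeholders, not the ones from the test setup later in this post:

import scala.sys.process._

object SubmitViaShell {
  def main(args: Array[String]): Unit = {
    // Assemble the spark-submit command line (placeholder jar path and class).
    val cmd = Seq(
      "spark-submit",
      "--master", "yarn",
      "--deploy-mode", "cluster",
      "--class", "MySparkJob",
      "/path/to/child-job.jar",
      "A", "B")
    // Run the command and block until spark-submit exits.
    val exitCode = cmd.!
    println(s"spark-submit exited with code $exitCode")
  }
}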

Approaches to Launching Spark Jobs

Using the spark-submit command

A Spark job is normally started with the spark-submit command, but spark-submit is invoked as a local process on the machine where the driver runs. With this approach the parent job's --deploy-mode can therefore only be client; otherwise the driver lands on an arbitrary cluster node where the spark-submit environment may not be available.

Using the SparkLauncher class

Using the SparkLauncher class avoids this limitation of the spark-submit approach: both the parent and the child Spark jobs can be started in cluster mode.

SparkLauncher Test Notes

In most cases, when launching a job through SparkLauncher you also need to specify the operating-system user the job should run as (via HADOOP_USER_NAME in the launcher's environment, as in the test code below); otherwise the child job may fail with permission errors.

The test code is as follows:

import org.apache.spark.internal.Logging
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}
import org.apache.spark.sql.SparkSession

object MySparkLauncher extends Logging {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MySparkLauncher").master("yarn").getOrCreate()

    val jar = args(0)
    val sparkHome = args(1)
    logInfo(s"-2--System.getenv(SPARK_USER)-- [${System.getenv("SPARK_USER")}]")

    // The OS user the child Spark job runs as. HADOOP_USER_NAME can only be
    // passed through the SparkLauncher(env) constructor; it cannot be set via
    // setConf, nor via System.setProperty(), because it must reach the
    // environment of the spark-submit child process that SparkLauncher spawns.
    val env: java.util.Map[String, String] = new java.util.HashMap[String, String]
    env.put("HADOOP_USER_NAME", "hdfs")

    val launcher = new SparkLauncher(env)
    launcher.setAppName("MySparkJob")
    launcher.setMaster("yarn")
    launcher.setSparkHome(sparkHome)
    launcher.setDeployMode("cluster") // the child job also runs in cluster mode
    launcher.setAppResource(jar)
    launcher.setMainClass("MySparkJob")
    launcher.setConf("spark.driver.memory", "1G")
    launcher.setConf("spark.driver.cores", "1")
    launcher.setConf("spark.executor.memory", "1G")
    launcher.setConf("spark.executor.cores", "1")
    launcher.setConf("spark.executor.instances", "1")
    launcher.addAppArgs(args(2), args(3))
    launcher.setVerbose(true)

    val handle = launcher.startApplication(
      new SparkAppHandle.Listener() {
        override def stateChanged(handle: SparkAppHandle): Unit = {
          System.out.println("********** state changed **********")
        }
        override def infoChanged(handle: SparkAppHandle): Unit = {
          System.out.println("********** info changed **********")
        }
      }
    )

    // Poll until the child application reaches a final state
    // (FINISHED, FAILED, KILLED or LOST).
    while (!handle.getState.isFinal) {
      System.out.println("id " + handle.getAppId)
      System.out.println("state " + handle.getState)
      try {
        Thread.sleep(10000)
      } catch {
        case e: InterruptedException => e.printStackTrace()
      }
    }
    spark.stop()
  }
}

object MySparkJob extends Logging {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("yarn").getOrCreate()
    logInfo(s"Started MySparkJob--Arg0[${args(0)}]-Arg1[${args(1)}]")
    Thread.sleep(100L)
    spark.stop()
  }
}
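The polling loop above works, but the SparkAppHandle listener callbacks also make it possible to block on a CountDownLatch instead of busy-waiting. A minimal sketch of that pattern (my own variation, not from the test above; args(0) is assumed to be the child job's jar path):

import java.util.concurrent.CountDownLatch
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

object LatchLauncher {
  def main(args: Array[String]): Unit = {
    val done = new CountDownLatch(1)
    val handle = new SparkLauncher()
      .setAppResource(args(0)) // child job's jar (assumed positional argument)
      .setMainClass("MySparkJob")
      .setMaster("yarn")
      .setDeployMode("cluster")
      .startApplication(new SparkAppHandle.Listener {
        override def stateChanged(h: SparkAppHandle): Unit = {
          // FINISHED, FAILED, KILLED and LOST are all final states.
          if (h.getState.isFinal) done.countDown()
        }
        override def infoChanged(h: SparkAppHandle): Unit = ()
      })
    done.await() // block until the child application ends
    println(s"child job ${handle.getAppId} ended in state ${handle.getState}")
  }
}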

Launch Command

After the parent application jar, the positional arguments are: the child job's application jar (here the same assembly jar), SPARK_HOME on the cluster nodes, and the two pass-through arguments A and B:

[manager@bdas1 etl]$ spark2-submit --master yarn --deploy-mode cluster --driver-memory 8G --driver-cores 4 --executor-memory 8G --executor-cores 4 --num-executors 5 --class  "MySparkLauncher"  hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2 A B

Test Results

As the driver log below shows, the parent application (application_1565250810361_0925, user manager) launched the child application (application_1565250810361_0926, queue root.users.hdfs, user hdfs), which completed with final status SUCCEEDED.

MySparkLauncher Log

 Log Type: stderr

Log Upload Time: Fri Aug 30 13:21:43 +0800 2019

Log Length: 57412

19/08/30 13:21:15 INFO util.SignalUtils: Registered signal handler for TERM
19/08/30 13:21:15 INFO util.SignalUtils: Registered signal handler for HUP
19/08/30 13:21:15 INFO util.SignalUtils: Registered signal handler for INT
19/08/30 13:21:15 INFO spark.SecurityManager: Changing view acls to: yarn,manager
19/08/30 13:21:15 INFO spark.SecurityManager: Changing modify acls to: yarn,manager
19/08/30 13:21:15 INFO spark.SecurityManager: Changing view acls groups to:
19/08/30 13:21:15 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/30 13:21:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, manager); groups with view permissions: Set(); users with modify permissions: Set(yarn, manager); groups with modify permissions: Set()
19/08/30 13:21:16 INFO yarn.ApplicationMaster: Preparing Local resources
19/08/30 13:21:17 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1565250810361_0925_000001
19/08/30 13:21:17 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
19/08/30 13:21:17 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
19/08/30 13:21:17 INFO spark.SparkContext: Running Spark version 2.3.0.cloudera2
19/08/30 13:21:17 INFO spark.SparkContext: Submitted application: MySparkLauncher
19/08/30 13:21:17 INFO spark.SecurityManager: Changing view acls to: yarn,manager
19/08/30 13:21:17 INFO spark.SecurityManager: Changing modify acls to: yarn,manager
19/08/30 13:21:17 INFO spark.SecurityManager: Changing view acls groups to:
19/08/30 13:21:17 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/30 13:21:17 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, manager); groups with view permissions: Set(); users with modify permissions: Set(yarn, manager); groups with modify permissions: Set()
19/08/30 13:21:17 INFO util.Utils: Successfully started service 'sparkDriver' on port 9084.
19/08/30 13:21:17 INFO spark.SparkEnv: Registering MapOutputTracker
19/08/30 13:21:17 INFO spark.SparkEnv: Registering BlockManagerMaster
19/08/30 13:21:17 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/30 13:21:17 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/30 13:21:17 INFO storage.DiskBlockManager: Created local directory at /data1/yarn/nm/usercache/manager/appcache/application_1565250810361_0925/blockmgr-01888769-3951-4feb-89ec-606a321ab6fa
19/08/30 13:21:17 INFO memory.MemoryStore: MemoryStore started with capacity 4.1 GB
19/08/30 13:21:17 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/08/30 13:21:17 INFO util.log: Logging initialized @2920ms
19/08/30 13:21:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
19/08/30 13:21:18 INFO server.Server: jetty-9.3.z-SNAPSHOT
19/08/30 13:21:18 INFO server.Server: Started @3018ms
19/08/30 13:21:18 INFO server.AbstractConnector: Started ServerConnector@3e8ac4d0{HTTP/1.1,[http/1.1]}{0.0.0.0:22375}
19/08/30 13:21:18 INFO util.Utils: Successfully started service 'SparkUI' on port 22375.
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@73fc1e38{/jobs,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5e2ca67d{/jobs/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53f044a1{/jobs/job,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@740ca91b{/jobs/job/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56d988c9{/stages,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d0dfe76{/stages/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5317d01d{/stages/stage,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b699684{/stages/stage/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7cda330f{/stages/pool,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44d78211{/stages/pool/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7ed2e2a9{/storage,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3ad65c96{/storage/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ff5a63f{/storage/rdd,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6966c7d9{/storage/rdd/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@73bc3127{/environment,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d4a5312{/environment/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b5887ee{/executors,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e766e00{/executors/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15524aa7{/executors/threadDump,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@227725b{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@590efae3{/static,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e6825f5{/,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e1c79fa{/api,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f449b0a{/jobs/job/kill,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11ddc8bd{/stages/stage/kill,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://bdas6.hadoop0.cupdata.com:22375
19/08/30 13:21:18 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
19/08/30 13:21:18 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1565250810361_0925 and attemptId Some(appattempt_1565250810361_0925_000001)
19/08/30 13:21:18 INFO util.Utils: Using initial executors = 5, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:18 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56854.
19/08/30 13:21:18 INFO netty.NettyBlockTransferService: Server created on bdas6.hadoop0.cupdata.com:56854
19/08/30 13:21:18 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/30 13:21:18 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, bdas6.hadoop0.cupdata.com, 56854, None)
19/08/30 13:21:18 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:56854 with 4.1 GB RAM, BlockManagerId(driver, bdas6.hadoop0.cupdata.com, 56854, None)
19/08/30 13:21:18 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, bdas6.hadoop0.cupdata.com, 56854, None)
19/08/30 13:21:18 INFO storage.BlockManager: external shuffle service port = 7337
19/08/30 13:21:18 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, bdas6.hadoop0.cupdata.com, 56854, None)
19/08/30 13:21:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e608ab9{/metrics/json,null,AVAILABLE,@Spark}
19/08/30 13:21:18 INFO scheduler.EventLoggingListener: Logging events to hdfs://nameservice1/user/spark/spark2ApplicationHistory/application_1565250810361_0925_1
19/08/30 13:21:18 INFO util.Utils: Using initial executors = 5, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:18 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/08/30 13:21:18 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.NavigatorAppListener
19/08/30 13:21:18 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>{{HADOOP_COMMON_HOME}}/../../../SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/*<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>{{HADOOP_COMMON_HOME}}/../../../SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/kafka-0.9/*:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/activation-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aopalliance-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-i18n-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-kerberos-codec-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-asn1-api-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-util-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/asm-3.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/avro-1.7.6-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aws-java-sdk-bundle-1.11.134.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/azure-data-lake-store-sdk-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-1.9.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-core-1.8.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-codec-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-configuration-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-daemon-1.0.13.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-digester-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-el-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-math3-3.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-net-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-client-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-framework-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-recipes-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guava-11.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guice-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-annotations-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-ant-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archive-logs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archives-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-auth-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-aws-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../
../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-datajoin-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-distcp-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-extras-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-gridmix-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-nfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-openstack-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-rumen-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-sls-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-streaming-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hamcrest-core-1.3.jar:
{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/htrace-core4-4.0.1-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpclient-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpcore-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hue-plugins-3.9.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-annotations-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-databind-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-mapper-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-compiler-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-runtime-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/java-xmlbuilder-0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/javax.inject-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-api-2.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-impl-2.2.3-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jets3t-0.9.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jettison-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jline-2.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsch-0.1.42.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsp-api-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsr305-3.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/leveldbjni-all-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/log4j-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/metrics-core-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/mockito-all-1.8.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/netty-3.10.5.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okhttp-2.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okio-1.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/paranamer-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/protobuf-java-2.5.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-api-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/snappy-java-1.0.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/stax-api-1.0-2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xercesImpl-2.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xml-apis-1.3.04.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xmlenc-0.52.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/zookeeper-3.4.5-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.
14.2-1.cdh5.14.2.p0.3/lib/hadoop/LICENSE.txt:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/NOTICE.txt<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
SPARK_DIST_CLASSPATH -> /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/kafka-0.9/*:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/avro-1.7.6-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/azure-data-lake-store-sdk-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-annotations-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-ant-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archive-logs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archives-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-aws-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-datajoin-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-distcp-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-extras-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-gridmix-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduc
e-client-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-openstack-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-rumen-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-sls-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-streaming-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hue-plugins-3.9.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0
.3/jars/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okhttp-2.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okio-1.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/zookeeper-3.4.5-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/LICENSE.txt:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/NOTICE.txt
SPARK_YARN_STAGING_DIR -> *********(redacted)
SPARK_USER -> *********(redacted)

command:
LD_LIBRARY_PATH="{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native:$LD_LIBRARY_PATH" \
{{JAVA_HOME}}/bin/java \
-server \
-Xmx8192m \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.authenticate=false' \
'-Dspark.network.crypto.enabled=false' \
'-Dspark.shuffle.service.port=7337' \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler@bdas6.hadoop0.cupdata.com:9084 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
4 \
--app-id \
application_1565250810361_0925 \
--user-class-path \
file:$PWD/__app__.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr

resources:
__app__.jar -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar" } size: 17294521 timestamp: 1567142467454 type: FILE visibility: PUBLIC
__spark_conf__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/manager/.sparkStaging/application_1565250810361_0925/__spark_conf__.zip" } size: 156605 timestamp: 1567142472816 type: ARCHIVE visibility: PRIVATE

===============================================================================
19/08/30 13:21:18 INFO client.RMProxy: Connecting to ResourceManager at bdas1.hadoop0.cupdata.com/10.192.247.221:8030
19/08/30 13:21:18 INFO yarn.YarnRMClient: Registering the ApplicationMaster
19/08/30 13:21:18 INFO util.Utils: Using initial executors = 5, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:18 INFO yarn.YarnAllocator: Will request 5 executor container(s), each with 4 core(s) and 9011 MB memory (including 819 MB of overhead)
19/08/30 13:21:18 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@bdas6.hadoop0.cupdata.com:9084)
19/08/30 13:21:18 INFO yarn.YarnAllocator: Submitted 5 unlocalized container requests.
19/08/30 13:21:18 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/08/30 13:21:19 INFO yarn.YarnAllocator: Launching container container_1565250810361_0925_01_000002 on host bdas6.hadoop0.cupdata.com for executor with ID 1
19/08/30 13:21:19 INFO yarn.YarnAllocator: Launching container container_1565250810361_0925_01_000003 on host bdas6.hadoop0.cupdata.com for executor with ID 2
19/08/30 13:21:19 INFO yarn.YarnAllocator: Launching container container_1565250810361_0925_01_000004 on host bdas6.hadoop0.cupdata.com for executor with ID 3
19/08/30 13:21:19 INFO yarn.YarnAllocator: Launching container container_1565250810361_0925_01_000005 on host bdas6.hadoop0.cupdata.com for executor with ID 4
19/08/30 13:21:19 INFO yarn.YarnAllocator: Launching container container_1565250810361_0925_01_000006 on host bdas6.hadoop0.cupdata.com for executor with ID 5
19/08/30 13:21:19 INFO yarn.YarnAllocator: Received 5 containers from YARN, launching executors on 5 of them.
19/08/30 13:21:21 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:19825) with ID 1
19/08/30 13:21:21 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:19826) with ID 4
19/08/30 13:21:21 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 1)
19/08/30 13:21:21 INFO spark.ExecutorAllocationManager: New executor 4 has registered (new total is 2)
19/08/30 13:21:21 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:19823) with ID 2
19/08/30 13:21:21 INFO spark.ExecutorAllocationManager: New executor 2 has registered (new total is 3)
19/08/30 13:21:21 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:19824) with ID 3
19/08/30 13:21:21 INFO spark.ExecutorAllocationManager: New executor 3 has registered (new total is 4)
19/08/30 13:21:21 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:18552 with 4.1 GB RAM, BlockManagerId(4, bdas6.hadoop0.cupdata.com, 18552, None)
19/08/30 13:21:21 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:19827) with ID 5
19/08/30 13:21:21 INFO spark.ExecutorAllocationManager: New executor 5 has registered (new total is 5)
19/08/30 13:21:21 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:36933 with 4.1 GB RAM, BlockManagerId(1, bdas6.hadoop0.cupdata.com, 36933, None)
19/08/30 13:21:21 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:15054 with 4.1 GB RAM, BlockManagerId(2, bdas6.hadoop0.cupdata.com, 15054, None)
19/08/30 13:21:21 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
19/08/30 13:21:21 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
19/08/30 13:21:21 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:22309 with 4.1 GB RAM, BlockManagerId(3, bdas6.hadoop0.cupdata.com, 22309, None)
19/08/30 13:21:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:15645 with 4.1 GB RAM, BlockManagerId(5, bdas6.hadoop0.cupdata.com, 15645, None)
19/08/30 13:21:22 INFO MySparkLauncher: -2--System.getenv(SPARK_USER)-- [manager]
19/08/30 13:21:22 INFO app.MySparkJob: Using properties file: /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/conf/spark-defaults.conf
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.lineage.log.dir=/hadoop/log/spark2/lineage
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.yarn.jars=local:/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/*
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.eventLog.enabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.shuffle.service.enabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.extraListeners=com.cloudera.spark.lineage.NavigatorAppListener
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.lineage.enabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.yarn.historyServer.address=http://bdas5.hadoop0.cupdata.com:18089
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.ui.enabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.ui.killEnabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.sql.hive.metastore.jars=${env:HADOOP_COMMON_HOME}/../hive/lib/*:${env:HADOOP_COMMON_HOME}/client/*
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.dynamicAllocation.schedulerBacklogTimeout=1
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.driver.memory=5g
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.yarn.config.gatewayPath=/opt/cloudera/parcels
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.sql.queryExecutionListeners=com.cloudera.spark.lineage.NavigatorQueryListener
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.yarn.config.replacementPath={{HADOOP_COMMON_HOME}}/../../..
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.submit.deployMode=client
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.shuffle.service.port=7337
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.master=yarn
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.authenticate=false
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.network.crypto.enabled=false
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.executor.memory=10g
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.io.encryption.enabled=false
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.eventLog.dir=*********(redacted)
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.dynamicAllocation.enabled=true
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.sql.catalogImplementation=hive
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.dynamicAllocation.minExecutors=0
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.dynamicAllocation.executorIdleTimeout=60
19/08/30 13:21:22 INFO app.MySparkJob: Adding default property: spark.sql.hive.metastore.version=1.1.0
19/08/30 13:21:22 INFO app.MySparkJob: Parsed arguments:
19/08/30 13:21:22 INFO app.MySparkJob: master yarn
19/08/30 13:21:22 INFO app.MySparkJob: deployMode cluster
19/08/30 13:21:22 INFO app.MySparkJob: executorMemory 1G
19/08/30 13:21:22 INFO app.MySparkJob: executorCores 1
19/08/30 13:21:22 INFO app.MySparkJob: totalExecutorCores null
19/08/30 13:21:22 INFO app.MySparkJob: propertiesFile /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/conf/spark-defaults.conf
19/08/30 13:21:22 INFO app.MySparkJob: driverMemory 1G
19/08/30 13:21:22 INFO app.MySparkJob: driverCores 1
19/08/30 13:21:22 INFO app.MySparkJob: driverExtraClassPath null
19/08/30 13:21:22 INFO app.MySparkJob: driverExtraLibraryPath /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native
19/08/30 13:21:22 INFO app.MySparkJob: driverExtraJavaOptions null
19/08/30 13:21:22 INFO app.MySparkJob: supervise false
19/08/30 13:21:22 INFO app.MySparkJob: queue null
19/08/30 13:21:22 INFO app.MySparkJob: numExecutors 1
19/08/30 13:21:22 INFO app.MySparkJob: files null
19/08/30 13:21:22 INFO app.MySparkJob: pyFiles null
19/08/30 13:21:22 INFO app.MySparkJob: archives null
19/08/30 13:21:22 INFO app.MySparkJob: mainClass MySparkJob
19/08/30 13:21:22 INFO app.MySparkJob: primaryResource hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar
19/08/30 13:21:22 INFO app.MySparkJob: name MySparkJob
19/08/30 13:21:22 INFO app.MySparkJob: childArgs [A B]
19/08/30 13:21:22 INFO app.MySparkJob: jars null
19/08/30 13:21:22 INFO app.MySparkJob: packages null
19/08/30 13:21:22 INFO app.MySparkJob: packagesExclusions null
19/08/30 13:21:22 INFO app.MySparkJob: repositories null
19/08/30 13:21:22 INFO app.MySparkJob: verbose true
19/08/30 13:21:22 INFO app.MySparkJob:
19/08/30 13:21:22 INFO app.MySparkJob: Spark properties used, including those specified through
19/08/30 13:21:22 INFO app.MySparkJob: --conf and those from the properties file /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/conf/spark-defaults.conf:
19/08/30 13:21:22 INFO app.MySparkJob: (spark.sql.queryExecutionListeners,com.cloudera.spark.lineage.NavigatorQueryListener)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.lineage.enabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.driver.memory,1G)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.executor.memory,1G)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.authenticate,false)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.executor.instances,1)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/*)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.yarn.historyServer.address,http://bdas5.hadoop0.cupdata.com:18089)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.eventLog.enabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.dynamicAllocation.schedulerBacklogTimeout,1)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.driver.cores,1)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.ui.killEnabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.serializer,org.apache.spark.serializer.KryoSerializer)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.dynamicAllocation.executorIdleTimeout,60)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.dynamicAllocation.minExecutors,0)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.shuffle.service.enabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.ui.enabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.io.encryption.enabled,false)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.sql.hive.metastore.version,1.1.0)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.submit.deployMode,client)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.shuffle.service.port,7337)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.network.crypto.enabled,false)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.eventLog.dir,*********(redacted))
19/08/30 13:21:22 INFO app.MySparkJob: (spark.master,yarn)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.extraListeners,com.cloudera.spark.lineage.NavigatorAppListener)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.lineage.log.dir,/hadoop/log/spark2/lineage)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.dynamicAllocation.enabled,true)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.sql.catalogImplementation,hive)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.executor.cores,1)
19/08/30 13:21:22 INFO app.MySparkJob: (spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/*:${env:HADOOP_COMMON_HOME}/client/*)
19/08/30 13:21:22 INFO app.MySparkJob:
19/08/30 13:21:22 INFO app.MySparkJob:
19/08/30 13:21:23 INFO app.MySparkJob: Main class:
19/08/30 13:21:23 INFO app.MySparkJob: org.apache.spark.deploy.yarn.YarnClusterApplication
19/08/30 13:21:23 INFO app.MySparkJob: Arguments:
19/08/30 13:21:23 INFO app.MySparkJob: --jar
19/08/30 13:21:23 INFO app.MySparkJob: hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar
19/08/30 13:21:23 INFO app.MySparkJob: --class
19/08/30 13:21:23 INFO app.MySparkJob: MySparkJob
19/08/30 13:21:23 INFO app.MySparkJob: --arg
19/08/30 13:21:23 INFO app.MySparkJob: A
19/08/30 13:21:23 INFO app.MySparkJob: --arg
19/08/30 13:21:23 INFO app.MySparkJob: B
19/08/30 13:21:23 INFO app.MySparkJob: Spark config:
19/08/30 13:21:23 INFO app.MySparkJob: (spark.lineage.log.dir,/hadoop/log/spark2/lineage)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.serializer,org.apache.spark.serializer.KryoSerializer)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/*)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.eventLog.enabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.shuffle.service.enabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.extraListeners,com.cloudera.spark.lineage.NavigatorAppListener)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.lineage.enabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.yarn.historyServer.address,http://bdas5.hadoop0.cupdata.com:18089)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.ui.enabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.app.name,MySparkJob)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.ui.killEnabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/*:${env:HADOOP_COMMON_HOME}/client/*)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.dynamicAllocation.schedulerBacklogTimeout,1)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.driver.memory,1G)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.executor.instances,1)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.sql.queryExecutionListeners,com.cloudera.spark.lineage.NavigatorQueryListener)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.driver.cores,1)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.submit.deployMode,cluster)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.shuffle.service.port,7337)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.master,yarn)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.authenticate,false)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.network.crypto.enabled,false)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.executor.memory,1G)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.io.encryption.enabled,false)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.eventLog.dir,*********(redacted))
19/08/30 13:21:23 INFO app.MySparkJob: (spark.dynamicAllocation.enabled,true)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.sql.catalogImplementation,hive)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.executor.cores,1)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.dynamicAllocation.minExecutors,0)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.dynamicAllocation.executorIdleTimeout,60)
19/08/30 13:21:23 INFO app.MySparkJob: (spark.sql.hive.metastore.version,1.1.0)
19/08/30 13:21:23 INFO app.MySparkJob: Classpath elements:
19/08/30 13:21:23 INFO app.MySparkJob: hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar
19/08/30 13:21:23 INFO app.MySparkJob:
19/08/30 13:21:23 INFO app.MySparkJob:
19/08/30 13:21:23 INFO app.MySparkJob: Warning: Skip remote jar hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar.
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO client.RMProxy: Connecting to ResourceManager at bdas1.hadoop0.cupdata.com/10.192.247.221:8032
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Requesting a new application from cluster with 6 NodeManagers
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (82336 MB per container)
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Setting up container launch context for our AM
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Setting up the launch environment for our AM container
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Preparing resources for our AM container
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs:/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar
19/08/30 13:21:24 INFO app.MySparkJob: 19/08/30 13:21:24 INFO yarn.Client: Uploading resource file:/data1/yarn/nm/usercache/manager/appcache/application_1565250810361_0925/spark-a31235f3-5f8b-4e45-b7a4-a441c4cfcedd/__spark_conf__3542125000983363974.zip -> hdfs://nameservice1/user/hdfs/.sparkStaging/application_1565250810361_0926/__spark_conf__.zip
19/08/30 13:21:25 INFO app.MySparkJob: 19/08/30 13:21:25 INFO spark.SecurityManager: Changing view acls to: yarn,manager
19/08/30 13:21:25 INFO app.MySparkJob: 19/08/30 13:21:25 INFO spark.SecurityManager: Changing modify acls to: yarn,manager
19/08/30 13:21:25 INFO app.MySparkJob: 19/08/30 13:21:25 INFO spark.SecurityManager: Changing view acls groups to:
19/08/30 13:21:25 INFO app.MySparkJob: 19/08/30 13:21:25 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/30 13:21:25 INFO app.MySparkJob: 19/08/30 13:21:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, manager); groups with view permissions: Set(); users with modify permissions: Set(yarn, manager); groups with modify permissions: Set()
19/08/30 13:21:26 INFO app.MySparkJob: 19/08/30 13:21:26 INFO yarn.Client: Submitting application application_1565250810361_0926 to ResourceManager
19/08/30 13:21:26 INFO app.MySparkJob: 19/08/30 13:21:26 INFO impl.YarnClientImpl: Submitted application application_1565250810361_0926
19/08/30 13:21:27 INFO app.MySparkJob: 19/08/30 13:21:27 INFO yarn.Client: Application report for application_1565250810361_0926 (state: ACCEPTED)
19/08/30 13:21:27 INFO app.MySparkJob: 19/08/30 13:21:27 INFO yarn.Client:
19/08/30 13:21:27 INFO app.MySparkJob: client token: N/A
19/08/30 13:21:27 INFO app.MySparkJob: diagnostics: N/A
19/08/30 13:21:27 INFO app.MySparkJob: ApplicationMaster host: N/A
19/08/30 13:21:27 INFO app.MySparkJob: ApplicationMaster RPC port: -1
19/08/30 13:21:27 INFO app.MySparkJob: queue: root.users.hdfs
19/08/30 13:21:27 INFO app.MySparkJob: start time: 1567142486340
19/08/30 13:21:27 INFO app.MySparkJob: final status: UNDEFINED
19/08/30 13:21:27 INFO app.MySparkJob: tracking URL: http://bdas1.hadoop0.cupdata.com:8088/proxy/application_1565250810361_0926/
19/08/30 13:21:27 INFO app.MySparkJob: user: hdfs
19/08/30 13:21:28 INFO app.MySparkJob: 19/08/30 13:21:28 INFO yarn.Client: Application report for application_1565250810361_0926 (state: ACCEPTED)
19/08/30 13:21:29 INFO app.MySparkJob: 19/08/30 13:21:29 INFO yarn.Client: Application report for application_1565250810361_0926 (state: ACCEPTED)
19/08/30 13:21:30 INFO app.MySparkJob: 19/08/30 13:21:30 INFO yarn.Client: Application report for application_1565250810361_0926 (state: ACCEPTED)
19/08/30 13:21:31 INFO app.MySparkJob: 19/08/30 13:21:31 INFO yarn.Client: Application report for application_1565250810361_0926 (state: RUNNING)
19/08/30 13:21:31 INFO app.MySparkJob: 19/08/30 13:21:31 INFO yarn.Client:
19/08/30 13:21:31 INFO app.MySparkJob: client token: N/A
19/08/30 13:21:31 INFO app.MySparkJob: diagnostics: N/A
19/08/30 13:21:31 INFO app.MySparkJob: ApplicationMaster host: 10.192.247.224
19/08/30 13:21:31 INFO app.MySparkJob: ApplicationMaster RPC port: 0
19/08/30 13:21:31 INFO app.MySparkJob: queue: root.users.hdfs
19/08/30 13:21:31 INFO app.MySparkJob: start time: 1567142486340
19/08/30 13:21:31 INFO app.MySparkJob: final status: UNDEFINED
19/08/30 13:21:31 INFO app.MySparkJob: tracking URL: http://bdas1.hadoop0.cupdata.com:8088/proxy/application_1565250810361_0926/
19/08/30 13:21:31 INFO app.MySparkJob: user: hdfs
19/08/30 13:21:32 INFO app.MySparkJob: 19/08/30 13:21:32 INFO yarn.Client: Application report for application_1565250810361_0926 (state: RUNNING)
19/08/30 13:21:33 INFO app.MySparkJob: 19/08/30 13:21:33 INFO yarn.Client: Application report for application_1565250810361_0926 (state: RUNNING)
19/08/30 13:21:34 INFO app.MySparkJob: 19/08/30 13:21:34 INFO yarn.Client: Application report for application_1565250810361_0926 (state: RUNNING)
19/08/30 13:21:35 INFO app.MySparkJob: 19/08/30 13:21:35 INFO yarn.Client: Application report for application_1565250810361_0926 (state: FINISHED)
19/08/30 13:21:35 INFO app.MySparkJob: 19/08/30 13:21:35 INFO yarn.Client:
19/08/30 13:21:35 INFO app.MySparkJob: client token: N/A
19/08/30 13:21:35 INFO app.MySparkJob: diagnostics: N/A
19/08/30 13:21:35 INFO app.MySparkJob: ApplicationMaster host: 10.192.247.224
19/08/30 13:21:35 INFO app.MySparkJob: ApplicationMaster RPC port: 0
19/08/30 13:21:35 INFO app.MySparkJob: queue: root.users.hdfs
19/08/30 13:21:35 INFO app.MySparkJob: start time: 1567142486340
19/08/30 13:21:35 INFO app.MySparkJob: final status: SUCCEEDED
19/08/30 13:21:35 INFO app.MySparkJob: tracking URL: http://bdas1.hadoop0.cupdata.com:8088/proxy/application_1565250810361_0926/
19/08/30 13:21:35 INFO app.MySparkJob: user: hdfs
19/08/30 13:21:35 INFO app.MySparkJob: 19/08/30 13:21:35 INFO util.ShutdownHookManager: Shutdown hook called
19/08/30 13:21:35 INFO app.MySparkJob: 19/08/30 13:21:35 INFO util.ShutdownHookManager: Deleting directory /data1/yarn/nm/usercache/manager/appcache/application_1565250810361_0925/spark-a31235f3-5f8b-4e45-b7a4-a441c4cfcedd
19/08/30 13:21:35 INFO app.MySparkJob: 19/08/30 13:21:35 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-29696b29-3f2d-45df-ac53-702c4f075fde
19/08/30 13:21:42 INFO server.AbstractConnector: Stopped Spark@3e8ac4d0{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
19/08/30 13:21:42 INFO ui.SparkUI: Stopped Spark web UI at http://bdas6.hadoop0.cupdata.com:22375
19/08/30 13:21:42 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
19/08/30 13:21:42 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
19/08/30 13:21:42 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/08/30 13:21:42 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
19/08/30 13:21:42 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/08/30 13:21:42 INFO memory.MemoryStore: MemoryStore cleared
19/08/30 13:21:42 INFO storage.BlockManager: BlockManager stopped
19/08/30 13:21:42 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/08/30 13:21:42 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/08/30 13:21:42 INFO spark.SparkContext: Successfully stopped SparkContext
19/08/30 13:21:42 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
19/08/30 13:21:42 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
19/08/30 13:21:42 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
19/08/30 13:21:42 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/manager/.sparkStaging/application_1565250810361_0925
19/08/30 13:21:42 INFO util.ShutdownHookManager: Shutdown hook called
19/08/30 13:21:42 INFO util.ShutdownHookManager: Deleting directory /data1/yarn/nm/usercache/manager/appcache/application_1565250810361_0925/spark-f11969e1-23a3-4c45-b7d8-98f2003db0ff

MySparkJob log

 Log Type: stderr

Log Upload Time: Fri Aug 30 13:21:36 +0800 2019

Log Length: 34500

19/08/30 13:21:27 INFO util.SignalUtils: Registered signal handler for TERM
19/08/30 13:21:27 INFO util.SignalUtils: Registered signal handler for HUP
19/08/30 13:21:27 INFO util.SignalUtils: Registered signal handler for INT
19/08/30 13:21:27 INFO spark.SecurityManager: Changing view acls to: yarn,hdfs
19/08/30 13:21:27 INFO spark.SecurityManager: Changing modify acls to: yarn,hdfs
19/08/30 13:21:27 INFO spark.SecurityManager: Changing view acls groups to:
19/08/30 13:21:27 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/30 13:21:27 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
19/08/30 13:21:28 INFO yarn.ApplicationMaster: Preparing Local resources
19/08/30 13:21:29 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1565250810361_0926_000001
19/08/30 13:21:29 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
19/08/30 13:21:29 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
19/08/30 13:21:29 INFO spark.SparkContext: Running Spark version 2.3.0.cloudera2
19/08/30 13:21:29 INFO spark.SparkContext: Submitted application: MySparkJob
19/08/30 13:21:29 INFO spark.SecurityManager: Changing view acls to: yarn,hdfs
19/08/30 13:21:29 INFO spark.SecurityManager: Changing modify acls to: yarn,hdfs
19/08/30 13:21:29 INFO spark.SecurityManager: Changing view acls groups to:
19/08/30 13:21:29 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/30 13:21:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
19/08/30 13:21:29 INFO util.Utils: Successfully started service 'sparkDriver' on port 56002.
19/08/30 13:21:29 INFO spark.SparkEnv: Registering MapOutputTracker
19/08/30 13:21:29 INFO spark.SparkEnv: Registering BlockManagerMaster
19/08/30 13:21:29 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/30 13:21:29 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/30 13:21:29 INFO storage.DiskBlockManager: Created local directory at /hadoop/yarn/nm/usercache/hdfs/appcache/application_1565250810361_0926/blockmgr-694c401a-2d18-41d2-b1b3-4c4e3c480921
19/08/30 13:21:29 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
19/08/30 13:21:29 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/08/30 13:21:29 INFO util.log: Logging initialized @2827ms
19/08/30 13:21:29 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
19/08/30 13:21:29 INFO server.Server: jetty-9.3.z-SNAPSHOT
19/08/30 13:21:29 INFO server.Server: Started @2915ms
19/08/30 13:21:29 INFO server.AbstractConnector: Started ServerConnector@2129fdc5{HTTP/1.1,[http/1.1]}{0.0.0.0:46273}
19/08/30 13:21:29 INFO util.Utils: Successfully started service 'SparkUI' on port 46273.
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a9e4b34{/jobs,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4aacd559{/jobs/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@45d6174e{/jobs/job,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c690913{/jobs/job/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@d1aecf4{/stages,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@41acfdbd{/stages/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@229f4533{/stages/stage,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@686383d7{/stages/stage/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7490dd1f{/stages/pool,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5f3ac281{/stages/pool/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b99d683{/storage,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53246fb{/storage/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2538625d{/storage/rdd,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1387521c{/storage/rdd/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@21a72b21{/environment,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@226950ab{/environment/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2167a604{/executors,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7da32c4c{/executors/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@533ad76e{/executors/threadDump,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1cc1b85c{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5e0b90f1{/static,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@725a27d5{/,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5416791a{/api,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1c1b6be5{/jobs/job/kill,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30540542{/stages/stage/kill,null,AVAILABLE,@Spark}
19/08/30 13:21:29 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://bdas4.hadoop0.cupdata.com:46273
19/08/30 13:21:29 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
19/08/30 13:21:29 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1565250810361_0926 and attemptId Some(appattempt_1565250810361_0926_000001)
19/08/30 13:21:29 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:29 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57445.
19/08/30 13:21:29 INFO netty.NettyBlockTransferService: Server created on bdas4.hadoop0.cupdata.com:57445
19/08/30 13:21:29 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/30 13:21:29 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, bdas4.hadoop0.cupdata.com, 57445, None)
19/08/30 13:21:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas4.hadoop0.cupdata.com:57445 with 366.3 MB RAM, BlockManagerId(driver, bdas4.hadoop0.cupdata.com, 57445, None)
19/08/30 13:21:29 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, bdas4.hadoop0.cupdata.com, 57445, None)
19/08/30 13:21:29 INFO storage.BlockManager: external shuffle service port = 7337
19/08/30 13:21:29 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, bdas4.hadoop0.cupdata.com, 57445, None)
19/08/30 13:21:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55917f4d{/metrics/json,null,AVAILABLE,@Spark}
19/08/30 13:21:30 INFO scheduler.EventLoggingListener: Logging events to hdfs://nameservice1/user/spark/spark2ApplicationHistory/application_1565250810361_0926_1
19/08/30 13:21:30 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:30 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/08/30 13:21:30 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.NavigatorAppListener
19/08/30 13:21:30 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>{{HADOOP_COMMON_HOME}}/../../../SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/*<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>{{HADOOP_COMMON_HOME}}/../../../SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/kafka-0.9/*:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/activation-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aopalliance-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-i18n-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-kerberos-codec-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-asn1-api-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-util-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/asm-3.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/avro-1.7.6-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aws-java-sdk-bundle-1.11.134.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/azure-data-lake-store-sdk-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-1.9.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-core-1.8.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-codec-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-configuration-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-daemon-1.0.13.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-digester-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-el-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-math3-3.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-net-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-client-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-framework-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-recipes-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guava-11.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guice-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-annotations-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-ant-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archive-logs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archives-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-auth-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-aws-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-datajoin-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-distcp-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-extras-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-gridmix-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-nfs-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-openstack-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-rumen-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-sls-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-streaming-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hamcrest-core-1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/htrace-core4-4.0.1-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpclient-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpcore-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hue-plugins-3.9.0-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-annotations-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-databind-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-mapper-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-compiler-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-runtime-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/java-xmlbuilder-0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/javax.inject-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-api-2.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-impl-2.2.3-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jets3t-0.9.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jettison-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jline-2.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsch-0.1.42.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsp-api-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsr305-3.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/leveldbjni-all-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/log4j-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/metrics-core-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/mockito-all-1.8.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/netty-3.10.5.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okhttp-2.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okio-1.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/paranamer-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/protobuf-java-2.5.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-api-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/snappy-java-1.0.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/stax-api-1.0-2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xercesImpl-2.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xml-apis-1.3.04.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xmlenc-0.52.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/jars/zookeeper-3.4.5-cdh5.14.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/LICENSE.txt:{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/NOTICE.txt<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
SPARK_DIST_CLASSPATH -> /opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/kafka-0.9/*:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/avro-1.7.6-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/azure-data-lake-store-sdk-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-annotations-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-ant-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archive-logs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-archives-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-aws-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-datajoin-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-distcp-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-extras-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-gridmix-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-openstack-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-rumen-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-sls-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-streaming-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hue-plugins-3.9.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okhttp-2.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/okio-1.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/zookeeper-3.4.5-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/LICENSE.txt:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/NOTICE.txt
SPARK_YARN_STAGING_DIR -> *********(redacted)
SPARK_USER -> *********(redacted)

command:
LD_LIBRARY_PATH="{{HADOOP_COMMON_HOME}}/../../../CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native:$LD_LIBRARY_PATH" \
{{JAVA_HOME}}/bin/java \
-server \
-Xmx1024m \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.authenticate=false' \
'-Dspark.network.crypto.enabled=false' \
'-Dspark.shuffle.service.port=7337' \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler@bdas4.hadoop0.cupdata.com:56002 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
1 \
--app-id \
application_1565250810361_0926 \
--user-class-path \
file:$PWD/__app__.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr

resources:
__app__.jar -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/manager/pcliu/SparkETL-2.11.8-2.3.0-assembly-0.1.jar" } size: 17294521 timestamp: 1567142467454 type: FILE visibility: PUBLIC
__spark_conf__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/hdfs/.sparkStaging/application_1565250810361_0926/__spark_conf__.zip" } size: 907830 timestamp: 1567142485158 type: ARCHIVE visibility: PRIVATE

===============================================================================
19/08/30 13:21:30 INFO client.RMProxy: Connecting to ResourceManager at bdas1.hadoop0.cupdata.com/10.192.247.221:8030
19/08/30 13:21:30 INFO yarn.YarnRMClient: Registering the ApplicationMaster
19/08/30 13:21:30 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/30 13:21:30 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
19/08/30 13:21:30 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@bdas4.hadoop0.cupdata.com:56002)
19/08/30 13:21:30 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
19/08/30 13:21:30 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/08/30 13:21:32 INFO yarn.YarnAllocator: Launching container container_1565250810361_0926_01_000002 on host bdas6.hadoop0.cupdata.com for executor with ID 1
19/08/30 13:21:32 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
19/08/30 13:21:34 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.192.247.226:56132) with ID 1
19/08/30 13:21:34 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 1)
19/08/30 13:21:34 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
19/08/30 13:21:34 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
19/08/30 13:21:34 INFO MySparkJob: Started MySparkJob--Arg0[A]-Arg1[B]
19/08/30 13:21:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager bdas6.hadoop0.cupdata.com:55762 with 366.3 MB RAM, BlockManagerId(1, bdas6.hadoop0.cupdata.com, 55762, None)
19/08/30 13:21:34 INFO server.AbstractConnector: Stopped Spark@2129fdc5{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
19/08/30 13:21:34 INFO ui.SparkUI: Stopped Spark web UI at http://bdas4.hadoop0.cupdata.com:46273
19/08/30 13:21:34 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
19/08/30 13:21:34 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
19/08/30 13:21:34 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/08/30 13:21:34 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
19/08/30 13:21:34 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/08/30 13:21:34 INFO memory.MemoryStore: MemoryStore cleared
19/08/30 13:21:34 INFO storage.BlockManager: BlockManager stopped
19/08/30 13:21:34 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/08/30 13:21:34 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/08/30 13:21:34 INFO spark.SparkContext: Successfully stopped SparkContext
19/08/30 13:21:34 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
19/08/30 13:21:34 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
19/08/30 13:21:34 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
19/08/30 13:21:34 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/hdfs/.sparkStaging/application_1565250810361_0926
19/08/30 13:21:34 INFO util.ShutdownHookManager: Shutdown hook called
19/08/30 13:21:34 INFO util.ShutdownHookManager: Deleting directory /hadoop/yarn/nm/usercache/hdfs/appcache/application_1565250810361_0926/spark-2b7d66fa-d543-4aa2-9abb-a9b663136a9e
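
Note: the child job's stderr above comes from YARN's aggregated container logs (hence the Log Type / Log Upload Time header). Assuming log aggregation is enabled on the cluster, the same output can be retrieved with yarn logs -applicationId application_1565250810361_0926.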

Additional notes

Permission caveat: the test program was submitted as the manager user, yet the child job executes as the hdfs user (note queue: root.users.hdfs and user: hdfs in the logs above), effectively bypassing user-level permission management and access control.
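
If this privilege escalation is not acceptable, one option is to propagate the submitting user into the child job instead of hard-coding a privileged account. The following is only a minimal sketch, not part of the original test code; the SPARK_USER lookup and the fallback to the JVM's OS user are assumptions to adapt to your own environment:

import org.apache.spark.launcher.SparkLauncher

object SameUserLauncherSketch {
  def main(args: Array[String]): Unit = {
    // Sketch (assumption): resolve the user who submitted the parent job,
    // falling back to the JVM's OS-level user if SPARK_USER is unset.
    val parentUser = sys.env.getOrElse("SPARK_USER", System.getProperty("user.name"))

    val env = new java.util.HashMap[String, String]
    env.put("HADOOP_USER_NAME", parentUser) // child job runs as the submitting user

    // As noted earlier, HADOOP_USER_NAME only takes effect when passed via the
    // SparkLauncher(env) constructor, not via setConf or System.setProperty.
    val launcher = new SparkLauncher(env)
      .setMaster("yarn")
      .setDeployMode("cluster")
  }
}

With this change the child application would be submitted to YARN under the same user (and hence the same queue and ACLs) as the parent, so the existing permission controls still apply.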