
Spark 2.1.0 Kafka connection fails with an error

Exception in thread "main" java.lang.IllegalArgumentException: 'path' is not specified
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:205)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:205)
    at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
    at org.apache.spark.sql.catalyst.util.CaseInsensitiveMap.getOrElse(CaseInsensitiveMap.scala:23)
    at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:204)
    at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:87)
    at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:87)
    at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:30)
    at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:124)
    at org.apache.spark.examp.JavaKafkaWordCountDataRow.main(JavaKafkaWordCountDataRow.java:63)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/07/03 11:24:03 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/07/03 11:24:03 INFO server.ServerConnector: Stopped ServerConnector@1eef9aef{HTTP/1.1}{0.0.0.0:4040}
17/07/03 11:24:03 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@37eeec90{/stages/stage/kill,null,UNAVAILABLE}
 

kun坤 2020-06-14 13:44:30
1 answer
  • The `'path' is not specified` error is thrown because no source format is set, so `readStream()` falls back to Spark's default file-based source, which requires a `path` option. Add `.format("kafka")`. Also note that `kafka.bootstrap.servers` must point at the Kafka brokers (default port 9092), not the ZooKeeper ensemble on port 2181, and that `selectExpr` returns a new Dataset rather than modifying `ds1` in place:

        // Declare the Kafka source explicitly; without .format("kafka")
        // Spark uses the default file source and demands a 'path' option.
        Dataset<Row> ds1 = spark
            .readStream()
            .format("kafka")
            // Kafka broker addresses (port 9092 assumed as the default);
            // 2181 is the ZooKeeper port and will not work here.
            .option("kafka.bootstrap.servers",
                    "192.168.28.101:9092,192.168.28.102:9092,192.168.28.103:9092")
            .option("subscribe", "3719e24b66abea87")
            .load();

        // selectExpr returns a new Dataset; assign the result instead of discarding it.
        Dataset<Row> rows = ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        System.out.println("---ds1--" + ds1);
    2021-02-21 00:27:33
