My code hits `org.apache.kafka.clients.consumer.OffsetOutOfRangeException`. I tried the check `if (e.getCause().getCause() instanceof OffsetOutOfRangeException)`, but what I catch is still a `SparkException`, not an `OffsetOutOfRangeException`.

```
ERROR Driver:86 - Error in executing stream
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 11, localhost, executor 0): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:588)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
    at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
    at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
```
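A fixed-depth check like `e.getCause().getCause()` is fragile, because Spark may wrap the executor-side exception under a varying number of layers, and in some cases the original exception class only survives as text inside the `SparkException` message. Below is a minimal, hedged sketch (class and helper names are my own, not from the question) that walks the entire cause chain and also falls back to matching the class name in the message; substitute `OffsetOutOfRangeException.class` for the stand-in type when using it against the real `SparkException`:

```java
// Hypothetical sketch: detect a target exception anywhere in a cause chain,
// rather than at an assumed fixed depth.
public class CauseChainCheck {

    // True if any link in t's cause chain is an instance of type, or if a
    // link's message mentions the type's fully qualified class name
    // (a fallback for wrappers that only preserve the message text).
    static boolean hasCause(Throwable t, Class<? extends Throwable> type) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (type.isInstance(cur)) {
                return true;
            }
            if (cur.getMessage() != null
                    && cur.getMessage().contains(type.getName())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Stand-ins for the real classes: RuntimeException plays the role of
        // SparkException, IllegalStateException that of OffsetOutOfRangeException.
        Throwable job = new RuntimeException(
                "Job aborted due to stage failure",
                new RuntimeException("task failure",
                        new IllegalStateException("Offsets out of range")));
        System.out.println(hasCause(job, IllegalStateException.class));
    }
}
```

Separately, the message `Offsets out of range with no configured reset policy` indicates the consumer's `auto.offset.reset` setting is `none`; configuring it to `earliest` or `latest` lets the consumer recover instead of throwing when the requested offset has been deleted.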