1. Taking a Socket data source as the example, we trace how the Receiver is started by walking through a WordCount computation.
The code is as follows:
object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }
    val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(1))
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
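To run this example locally you need something writing lines to the socket that NetworkWordCount connects to. The small helper below is not part of the original example (the object name SimpleLineServer and the sample sentence are assumptions for illustration); it plays the role of `nc -lk <port>`, accepting one connection and emitting a line every second:

import java.io.PrintWriter
import java.net.ServerSocket

object SimpleLineServer {
  def main(args: Array[String]): Unit = {
    val port = if (args.nonEmpty) args(0).toInt else 9999
    val server = new ServerSocket(port)
    println(s"Listening on port $port ...")
    val socket = server.accept()                 // wait for NetworkWordCount to connect
    val out = new PrintWriter(socket.getOutputStream, true)
    while (true) {
      out.println("the quick brown fox jumps over the lazy dog")
      Thread.sleep(1000)
    }
  }
}

Start it first, then run NetworkWordCount against the same host and port (for example, localhost 9999).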
2. ssc.socketTextStream calls the socketStream method, which creates a new SocketInputDStream instance. SocketInputDStream extends ReceiverInputDStream and implements the getReceiver method, and getReceiver instantiates a SocketReceiver, which extends the Receiver class. SocketReceiver mainly implements the onStart method; onStart starts a thread that calls the receive method, and the receive method holds the actual data-reception logic: it reads data over the Socket and wraps it into an Iterator. A condensed sketch of this receiver pattern is shown below.
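The following is a minimal, illustrative receiver in the spirit of what is described above (a sketch under assumed names such as SimpleSocketReceiver, not the actual SocketReceiver source): onStart starts a thread, and receive reads lines from the socket and hands them to Spark via store(). In the real code, SocketInputDStream.getReceiver() returns a receiver of exactly this shape.

import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Illustrative receiver: onStart() starts a thread, receive() reads lines
// from the socket and hands them to Spark via store().
class SimpleSocketReceiver(host: String, port: Int)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_SER) {

  override def onStart(): Unit = {
    // Start a background thread so that onStart() itself returns immediately.
    new Thread("Simple Socket Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  override def onStop(): Unit = {
    // Nothing to do here: the receiving thread exits once isStopped() is true.
  }

  // The actual data-reception logic: read lines and store them.
  private def receive(): Unit = {
    try {
      val socket = new Socket(host, port)
      val reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
      var line = reader.readLine()
      while (!isStopped() && line != null) {
        store(line)                 // hand the received line to Spark Streaming
        line = reader.readLine()
      }
      reader.close()
      socket.close()
      restart("Trying to connect again")
    } catch {
      case t: Throwable => restart("Error receiving data", t)
    }
  }
}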
Next we trace how this onStart method ends up being called, starting from ssc.start(), which invokes StreamingContext's start method. Look directly at the scheduler.start() line there: it calls JobScheduler's start method, and inside it receiverTracker.start() calls ReceiverTracker's start method. Next, look at the launchReceivers() method; the call chain so far is summarized below.
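A schematic summary of the startup call chain traced above (comments only, shown for orientation; the method names are those used in the text):

// ssc.start()                               (the WordCount example)
//   -> StreamingContext.start()
//        -> scheduler.start()               (JobScheduler.start())
//             -> receiverTracker.start()    (ReceiverTracker.start())
//                  -> launchReceivers()     (examined next)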
The code of launchReceivers() is as follows:
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
3.1 First look at receiverInputStreams; it is declared when ReceiverTracker is instantiated:
private val receiverInputStreams = ssc.graph.getReceiverInputStreams()
Now look at val rcvr = nis.getReceiver(). rcvr is a subclass of Receiver, namely the SocketReceiver we examined above. What launchReceivers ends up with is the collection receivers, because there may be more than one receiver (one per ReceiverInputDStream).
3.2 runDummySparkJob(), as the name suggests, runs a dummy job to check that the application has started properly. Looking at the code, it simply runs a small Spark job so that the executors have registered with the driver before the receivers are scheduled (otherwise they could all end up on the same node):
private def runDummySparkJob(): Unit = {
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }
  assert(getExecutors.nonEmpty)
}
3.3 Look at the last line, endpoint.send(StartAllReceivers(receivers)): it sends a StartAllReceivers message to the ReceiverTrackerEndpoint, and the ReceiverTrackerEndpoint is assigned in ReceiverTracker's start method.
3.4 Look at the message-handling code in ReceiverTrackerEndpoint, which is as follows:
case StartAllReceivers(receivers) =>
  val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
  for (receiver <- receivers) {
    val executors = scheduledLocations(receiver.streamId)
    updateReceiverScheduledExecutors(receiver.streamId, executors)
    receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
    startReceiver(receiver, executors)
  }
The line val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors) computes, for each receiver, the candidate Executors on which it can run. Next, look at the key call, startReceiver(receiver, executors). The code is as follows:
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException("Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
4. Now let's look in detail at what the startReceiver method does.
4.1 Look at the definition of the startReceiverFunc function: it is the function that the action of this job executes. It first checks that the iterator contains data and then takes the first element, which is the Receiver itself. This is a striking piece of design: the Receiver is wrapped up as the data of an RDD and shipped to an Executor to be run there (a stand-alone sketch of this trick is given at the end of this section).
val supervisor = new ReceiverSupervisorImpl(
  receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
supervisor.start()
4.2 The receiver is passed into ReceiverSupervisorImpl and ReceiverSupervisorImpl's start method is called; that in turn calls startReceiver, and startReceiver calls the receiver's onStart() method, which is the data-reception entry point mentioned earlier.
4.3 With the action function defined, look at receiverRDD: the RDD is created with either ssc.sc.makeRDD(Seq(receiver), 1) or ssc.sc.makeRDD(Seq(receiver -> preferredLocations)), depending on whether scheduled locations are available.
4.4 Finally, submitJob submits the RDD[Receiver] to the cluster. Note that every receiver gets its own job, so the failure of one receiver's job does not affect the rest of the application: when the job fails, a self.send(RestartReceiver(receiver)) message is sent and the job is resubmitted, which keeps the receiver reliable. This design is well worth learning from. A simplified, stand-alone sketch of the submitJob trick follows.
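To make the RDD-wrapping trick from 4.1 and 4.4 concrete outside of Spark Streaming, here is a minimal sketch: wrap an object in a one-element RDD and use SparkContext.submitJob so that an arbitrary function runs on the executor that holds that element. Everything in it (the object name RunOnExecutorSketch, the payload string, runFunc) is an illustrative assumption, not Spark source code.

import scala.concurrent.Await
import scala.concurrent.duration.Duration

import org.apache.spark.{SparkConf, SparkContext, TaskContext}

object RunOnExecutorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("run-on-executor-sketch").setMaster("local[2]"))

    // The "payload" we want to run remotely; startReceiver uses the Receiver itself.
    val payload = "hello from the driver"
    val rdd = sc.makeRDD(Seq(payload), 1)          // one element, one partition

    // Plays the role of startReceiverFunc: executed on the executor that owns
    // partition 0, where it can do arbitrary (even long-running) work.
    val runFunc: Iterator[String] => Unit = (it: Iterator[String]) => {
      val p = it.next()
      println(s"Running on an executor, attempt ${TaskContext.get().attemptNumber()}: $p")
    }

    // Submit a job over just partition 0 of the one-element RDD.
    val future = sc.submitJob[String, Unit, Unit](rdd, runFunc, Seq(0), (_, _) => (), ())
    Await.ready(future, Duration.Inf)              // wait so the example exits cleanly
    sc.stop()
  }
}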
Note: if there are any mistakes in the above, corrections are welcome.
Author: 海纳百川_spark
Link: https://www.jianshu.com/p/077fc812a666