
Understanding Receiver Startup: An Analysis of the Startup Source Code

Today we will talk about how to understand Receiver startup and walk through its startup source code. Many readers may not be very familiar with this topic, so the following summary has been put together in the hope that you will get something useful out of it.


Why do we need a Receiver?

A Receiver continuously receives data from an external data source and reports that data to the Driver. Every BatchDuration, the reported data is turned into a new Job, which executes the RDD operations.

Receivers are started when the application starts.

Receivers and InputDStreams correspond one to one.

An RDD[Receiver] has only one Partition, holding a single Receiver instance.

Spark Core knows nothing special about RDD[Receiver]; it schedules the corresponding Job just like any ordinary RDD. As a result, several Receivers may end up being launched on the same Executor, which causes load imbalance and can make Receiver startup fail.
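
To make these points concrete, here is a minimal, self-contained sketch of a receiver-based streaming application (not taken from the source analyzed below; the host "localhost" and port 9999 are placeholder values). socketTextStream is one of the built-in receiver-based inputs, so this program creates exactly one InputDStream and therefore one Receiver, and a new Job is generated every BatchDuration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ReceiverDemo {
  def main(args: Array[String]): Unit = {
    // Two local threads: one to run the Receiver task, one to process the batches.
    val conf = new SparkConf().setAppName("ReceiverDemo").setMaster("local[2]")
    // BatchDuration: every 5 seconds the data reported by the Receiver becomes a new Job.
    val ssc = new StreamingContext(conf, Seconds(5))

    // One receiver-based InputDStream => one Receiver instance running on an Executor.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()

    ssc.start()            // Receivers are launched when the streaming application starts.
    ssc.awaitTermination()
  }
}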

Approaches to starting Receivers on Executors:

1. Start different Receivers through different Partitions of an RDD: each Partition represents one Receiver, which at the execution level is one Task, and the Receiver is started when its Task starts.

This approach is simple and clever to implement, but it has drawbacks: startup may fail, and if a Receiver fails while running, its Task is retried; if it fails 3 times, the Job fails, which brings down the entire Spark application. Because a Receiver fault leads to Job failure, this scheme is not fault-tolerant. (A rough sketch of this scheme is given right after this list.)

2. The second approach is the one that Spark Streaming actually uses.
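
For illustration only, the sketch below mimics scheme 1 with a made-up FakeReceiver class rather than Spark's real Receiver API: every receiver becomes one Partition, and therefore one Task, of a single job, so a receiver task that keeps failing takes the whole job, and with it the application, down:

import org.apache.spark.{SparkConf, SparkContext}

object NaiveReceiverLaunch {
  // Stand-in for a real Receiver, used only for this demonstration.
  final case class FakeReceiver(id: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("NaiveReceiverLaunch").setMaster("local[4]"))
    val receivers = Seq(FakeReceiver(0), FakeReceiver(1), FakeReceiver(2))

    // One partition per receiver => one Task per receiver, all packed into a single job.
    // A real Receiver would loop forever receiving data inside its task; if that task
    // fails more times than the retry limit, the job -- and the application -- fails.
    sc.makeRDD(receivers, receivers.size).foreach { r =>
      println(s"receiver ${r.id} started on an executor")
    }
    sc.stop()
  }
}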

In the start method of ReceiverTracker, the RPC endpoint ReceiverTrackerEndpoint is instantiated first, and then the launchReceivers method is called.

/** Start the endpoint and receiver execution thread. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}

In the launchReceivers method, a Receiver is first obtained for each ReceiverInputDStream, and then a StartAllReceivers message is sent. Each Receiver corresponds to one data source.

/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()
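  // The dummy job above ensures that all Executors have registered with the driver,
  // so that the receivers are not all scheduled onto the same node.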

  logInfo("Starting "+ receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}

After ReceiverTrackerEndpoint receives the StartAllReceivers message, it first works out which Executors each Receiver should run on, and then calls the startReceiver method.

override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
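  // Other messages (e.g. RestartReceiver, used to restart a receiver whose startup job
  // has finished or failed) are handled by further cases that are omitted here.
}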

The startReceiver method specifies the TaskLocation itself, at the Driver level, instead of letting Spark Core choose the TaskLocation for us. It has the following characteristics: stopping a Receiver does not require restarting a Spark Job; once a Receiver has been started for the first time, the start function is not executed a second time within the same task; a Spark job is launched purely to start the Receiver, and each such Spark job starts exactly one Receiver. In other words, every Receiver start triggers its own Spark job, rather than every Receiver being started as one Task of a single Spark job. When the job submitted to start a Receiver fails, a RestartReceiver message is sent to restart that Receiver.

/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // We will keep restarting the receiver job until ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
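
The mechanism above boils down to submitting a one-partition job per Receiver with SparkContext.submitJob and resubmitting it from the Driver whenever it completes while the tracker is still running. The stand-alone sketch below (not Spark's internal code; the object name and strings are made up for illustration) shows that pattern in isolation:

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.util.{Failure, Success}

import org.apache.spark.{SparkConf, SparkContext}

object OneTaskJobPattern {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("OneTaskJobPattern").setMaster("local[2]"))

    // One element, one partition => the submitted job consists of exactly one task.
    val rdd = sc.makeRDD(Seq("receiver-0"), numSlices = 1)

    // The function that the single task runs; a real Receiver would block here receiving data.
    val runOnExecutor: Iterator[String] => Unit =
      it => it.foreach(name => println(s"$name running as the only task of its own job"))

    // Seq(0): run only partition 0, i.e. one task; the result handler and result are unused.
    val future = sc.submitJob[String, Unit, Unit](rdd, runOnExecutor, Seq(0), (_, _) => (), ())

    future.onComplete {
      case Success(_) => println("receiver job finished; the driver would resubmit it (RestartReceiver)")
      case Failure(e) => println(s"receiver job failed ($e); the driver would resubmit it as well")
    }

    Await.ready(future, Duration.Inf) // wait for the asynchronous job in this toy example
    sc.stop()
  }
}

Submitting one job per Receiver means a failing Receiver only fails its own job, which the Driver simply resubmits, instead of exhausting task retries inside a shared job as in scheme 1.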

Having read the above, you should now have a better understanding of how a Receiver is started and of the startup source code. Thank you for reading.

