
Spark Driver in Apache Spark

  • I already have a cluster of 3 machines (ubuntu1, ubuntu2, ubuntu3, as VirtualBox VMs) running Hadoop 1.0.0. I installed Spark on each of these machines. ubuntu1 is my master node and the other nodes are working as slaves. My question is: what exactly is a Spark driver? Should we set an IP and port for the Spark driver via spark.driver.host, and where will it be executed and located (master or slave)?
      December 17, 2021 11:56 AM IST
  • A Spark driver is the process that creates and owns an instance of SparkContext. It is your Spark application that launches the main method in which the instance of SparkContext is created. It is the cockpit of job and task execution (using the DAGScheduler and the TaskScheduler), and it hosts the Web UI for the environment.

    It splits a Spark application into tasks and schedules them to run on executors. The driver is where the task scheduler lives, and it spawns tasks across the workers. The driver coordinates the workers and the overall execution of tasks.
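
    For illustration only (not part of the original answer), a minimal driver program might look like the sketch below; the object name MinimalDriver, the master URL spark://ubuntu1:7077 (the standalone master on its default port) and the toy computation are assumptions made for this example:

    import org.apache.spark.{SparkConf, SparkContext}

    // The JVM process that runs this main method (and creates the SparkContext)
    // is the Spark driver; it hosts the Web UI and schedules tasks onto executors.
    object MinimalDriver {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("MinimalDriver")
          .setMaster("spark://ubuntu1:7077") // assumes ubuntu1 runs the standalone master on port 7077
        val sc = new SparkContext(conf)

        // Work declared here is split into tasks and scheduled on the executors.
        val sum = sc.parallelize(1 to 1000).map(_ * 2).reduce(_ + _)
        println(sum)

        sc.stop()
      }
    }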

      December 22, 2021 1:28 PM IST
  • Your question is related to deploying Spark on YARN; see http://spark.apache.org/docs/latest/running-on-yarn.html, "Running Spark on YARN".

    Assume you start from a spark-submit --master yarn command (an example command is sketched after the list below):

    1. The command will ask the YARN ResourceManager (RM) to start an ApplicationMaster (AM) process on one of your cluster machines (those that have a YARN NodeManager installed).
    2. Once the AM has started, it will call your driver program's main method. So the driver is actually where you define your SparkContext, your RDDs, and your jobs. The driver contains the entry main method, which starts the Spark computation.
    3. The SparkContext will prepare an RPC endpoint for the executors to talk back to, and a lot of other things (memory store, disk block manager, Jetty server, ...).
    4. The AM will request containers from the RM to run your Spark executors, with the driver's RPC URL (something like spark://CoarseGrainedScheduler@ip:37444) specified on the executor's start command.
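
    For reference, such a command might look like the sketch below; the class name com.example.MinimalDriver and the jar path are hypothetical placeholders, not from the original answer:

    # In cluster mode the driver runs inside the AM on a cluster node;
    # in client mode it runs in the spark-submit process on the machine you launch from.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MinimalDriver \
      path/to/app.jar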

    In the YARN cluster mode diagram, the yellow "Spark context" box is the driver.

      December 24, 2021 1:21 PM IST
  • The Spark driver is the program that declares the transformations and actions on RDDs of data and submits such requests to the master.

    In practical terms, the driver is the program that creates the SparkContext, connecting to a given Spark master. In the case of a local cluster, as in your case, the master_url is spark://<host>:<port>.

    Its location is independent of the master/slaves. You could co-locate it with the master or run it from another node. The only requirement is that it must be on a network addressable from the Spark workers.

    This is what the configuration of your driver looks like:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
          .setMaster("spark://<host>:<port>") // this is where the master is specified
          .setAppName("SparkExamplesMinimal")
          .set("spark.local.ip", "xx.xx.xx.xx")    // helps when multiple network interfaces are present; the driver must be on the same network as the master and slaves
          .set("spark.driver.host", "xx.xx.xx.xx") // same as above; this duality might disappear in a future version

    val sc = new SparkContext(conf)
    // etc...

     

    To explain the different roles a bit more (a short code sketch follows this list):

    • The driver prepares the context and declares the operations on the data using RDD transformations and actions.
    • The driver submits the serialized RDD graph to the master. The master creates tasks out of it and submits them to the workers for execution. It coordinates the different job stages.
    • The workers are where the tasks are actually executed. They should have the resources and network connectivity required to execute the operations requested on the RDDs.
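
      For example (a sketch, not from the original answer; the HDFS path is a placeholder, and it reuses the sc created above), the split between what the driver declares and what the workers execute looks roughly like this:

      // Declared in the driver: transformations are lazy, so nothing runs yet.
      val lines  = sc.textFile("hdfs:///path/to/input.txt") // placeholder path
      val errors = lines.filter(_.contains("ERROR"))

      // An action triggers a job: the RDD graph is turned into stages and tasks,
      // which are shipped to executors on the workers and executed there.
      val numErrors = errors.count()
      println(numErrors)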
      December 20, 2021 12:15 PM IST