Thursday, 5 March 2015

Apache Mesos vs. YARN

Mesos was built to be a global resource manager for your entire data center. 

  • Cluster manager
  • Two-level scheduler based on resource offers
  • Supports both long-running services and analytical jobs

YARN was created out of the need to move Hadoop MapReduce to its next iteration: resource management had to be pulled out of that embedded framework and into its own container-management life cycle model.

  • Resource negotiator
  • Single-level (monolithic) scheduler
  • Supports different types of analytics and data-processing jobs


The primary difference between Mesos and YARN is their scheduling model.
In Mesos, a job request comes into the Mesos master, which determines what resources are available across the cluster and makes offers back to the framework. Those offers can be accepted or rejected.
This lets the framework decide what the best fit is for the job it needs to run. If it accepts an offer, it places the job on the slave and all is well; if the offer isn't a good fit, it can reject it and wait for a better one to come in. One of the nice things about this model is that it is very scalable, because the placement decision is pushed down to the frameworks rather than made in one central component.
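To make that offer flow concrete, here is a minimal Python sketch of the two-level model. The class and method names (MesosMasterSketch, FrameworkSchedulerSketch, resource_offers) are illustrative stand-ins, not the real Mesos framework API; the point is only that the framework, not the master, decides whether an offer is a good fit.

# Hypothetical sketch of Mesos-style two-level scheduling (not the real API).
from dataclasses import dataclass

@dataclass
class Offer:
    slave: str
    cpus: float
    mem_mb: int

class MesosMasterSketch:
    """Level 1: tracks cluster resources and makes offers to frameworks."""
    def __init__(self, slaves):
        self.slaves = slaves  # slave name -> Offer describing free resources

    def make_offers(self):
        return list(self.slaves.values())

class FrameworkSchedulerSketch:
    """Level 2: the framework decides which offer (if any) fits its job."""
    def __init__(self, needed_cpus, needed_mem_mb):
        self.needed_cpus = needed_cpus
        self.needed_mem_mb = needed_mem_mb

    def resource_offers(self, offers):
        for offer in offers:
            if offer.cpus >= self.needed_cpus and offer.mem_mb >= self.needed_mem_mb:
                print(f"accept: launch task on {offer.slave}")
                return offer
            # Not a good fit: decline and wait for the next round of offers.
            print(f"decline: {offer.slave} is too small, waiting for another offer")
        return None

master = MesosMasterSketch({
    "slave-1": Offer("slave-1", cpus=2, mem_mb=2048),
    "slave-2": Offer("slave-2", cpus=8, mem_mb=16384),
})
framework = FrameworkSchedulerSketch(needed_cpus=4, needed_mem_mb=4096)
framework.resource_offers(master.make_offers())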
In YARN, a job request comes into the YARN resource manager, which evaluates all the resources available and places the job itself. Because a single component makes every placement decision, it is modeled as a monolithic scheduler. From a scaling perspective, then, Mesos has the better scaling capabilities. YARN was created to be a resource manager for Hadoop jobs; it has since tried to grow beyond that, into the space that Mesos occupies so well.
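For contrast, here is the same job placement written as a monolithic scheduler, in the spirit of YARN's resource manager. Again, ResourceManagerSketch and submit are hypothetical names for illustration, not the real YARN API; the key difference is that the central scheduler inspects the nodes and chooses the placement, while the application simply submits its request.

# Hypothetical sketch of a YARN-style monolithic scheduler (not the real API).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpus: float
    free_mem_mb: int

class ResourceManagerSketch:
    """Single decision point: evaluates all nodes and places the container."""
    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, cpus, mem_mb):
        for node in self.nodes:
            if node.free_cpus >= cpus and node.free_mem_mb >= mem_mb:
                # The resource manager itself decides where the job runs.
                node.free_cpus -= cpus
                node.free_mem_mb -= mem_mb
                print(f"container placed on {node.name}")
                return node
        print("request queued: no node currently fits")
        return None

rm = ResourceManagerSketch([
    Node("node-1", free_cpus=2, free_mem_mb=2048),
    Node("node-2", free_cpus=8, free_mem_mb=16384),
])
rm.submit(cpus=4, mem_mb=4096)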
Conclusion
To manage your data center as a whole, you have Mesos on one side, which can manage every single resource in the data center, and YARN on the other, which can safely manage Hadoop jobs but is not capable of managing your entire data center.

Reference:
https://www.mapr.com/blog/apache-mesos-vs-hadoop-yarn-whiteboard-walkthrough
