Friday 28 November 2014

Interview Questions 2 -- Hadoop

What is Hadoop Streaming?  
 
Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

What is the characteristic of the Streaming API that makes it possible to run MapReduce jobs in languages like Perl, Ruby, Awk, etc.?
 
Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job: both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
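For illustration, here is a minimal sketch of a streaming-style word-count mapper. In practice this would usually be written in Python, Perl, Ruby, etc.; it is shown in Java here only to keep all examples in one language. All Hadoop Streaming requires is that the program read lines from stdin and write tab-separated (key, value) pairs to stdout.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Streaming-style word-count mapper: reads raw text lines from stdin and
// emits "word<TAB>1" pairs on stdout, which is all Hadoop Streaming expects.
public class StreamingWordCountMapper {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String word : line.trim().split("\\s+")) {
                if (!word.isEmpty()) {
                    System.out.println(word + "\t1");
                }
            }
        }
    }
}
```

The executable (or script) is then passed to the streaming jar with options such as -input, -output, -mapper and -reducer.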

What is Distributed Cache in Hadoop?
 
Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
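As a rough sketch using the Hadoop 2 Job API (the file path and the "#lookup" symlink name are made-up examples): the driver registers the file with the cache, and each Mapper reads the localized copy once in setup().

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class DistributedCacheExample {

    // Driver side: register an HDFS file with the distributed cache.
    // "/apps/lookup.txt" and the "#lookup" symlink name are hypothetical.
    public static void addLookupFile(Job job) throws Exception {
        job.addCacheFile(new URI("/apps/lookup.txt#lookup"));
    }

    public static class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Map<String, String> lookup = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException {
            // The framework has already copied the cached file to this node;
            // it is available locally under the symlink name given after '#'.
            try (BufferedReader reader = new BufferedReader(new FileReader("lookup"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split("\t", 2);
                    if (parts.length == 2) {
                        lookup.put(parts[0], parts[1]);
                    }
                }
            }
        }

        // map() would then consult the in-memory lookup table for each record.
    }
}
```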

What is the benefit of Distributed Cache? Why can't we just have the file in HDFS and have the application read it?
 
Distributed Cache is much faster. The file is copied to every TaskTracker once, at the start of the job, so whether that node runs 10 or 100 Mappers or Reducers, they all share the same local copy. If, instead, each task reads the file from HDFS inside the MR job, then a TaskTracker running 100 map tasks will read the file from HDFS 100 times. HDFS is also not very efficient when used this way.

What mechanism does the Hadoop framework provide to synchronise changes made to the Distributed Cache during the runtime of the application?

This is a tricky question. There is no such mechanism: the Distributed Cache is, by design, read-only for the duration of job execution.

Is it possible to provide multiple inputs to Hadoop?

Yes, the input format class (FileInputFormat) provides methods to add multiple directories as input to a Hadoop job, and MultipleInputs can even bind a different Mapper to each path.
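A minimal sketch (the input paths are hypothetical):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class MultipleInputExample {
    public static void configureInputs(Job job) throws Exception {
        // Simplest form: add several directories as input to the same job.
        FileInputFormat.addInputPath(job, new Path("/data/2013"));
        FileInputFormat.addInputPath(job, new Path("/data/2014"));

        // Alternatively, org.apache.hadoop.mapreduce.lib.input.MultipleInputs
        // lets each path use its own InputFormat and Mapper, e.g.:
        // MultipleInputs.addInputPath(job, new Path("/data/logs"),
        //         TextInputFormat.class, LogMapper.class);
    }
}
```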

How will you write a custom partitioner for a Hadoop job?
 
To have Hadoop use a custom partitioner, you need to do at minimum the following three things (a minimal sketch follows this list):
- Create a new class that extends the Partitioner class
- Override the getPartition method
- In the wrapper that runs the MapReduce job, either add the custom partitioner to the job programmatically using the setPartitionerClass method, or add it to the job via a config file (if your wrapper reads from a config file or Oozie)
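A minimal sketch, assuming Text keys and IntWritable values (the class name and the partitioning rule, first letter of the key, are made-up examples):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Step 1: extend Partitioner. Step 2: override getPartition.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String k = key.toString();
        if (k.isEmpty()) {
            return 0;
        }
        // Send all keys that start with the same letter to the same reducer.
        return Character.toLowerCase(k.charAt(0)) % numPartitions;
    }
}
```

Step 3, in the driver: job.setPartitionerClass(FirstLetterPartitioner.class);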

How did you debug your Hadoop code?  
 
There can be several ways of doing this, but the most common are:
- Using counters (a sketch follows this list).
- Using the web interface provided by the Hadoop framework.
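For the counters approach, here is a minimal sketch (the counter names and the validity check are made-up examples): each task increments a custom counter, the framework aggregates the totals per job, and the results show up in the job's web UI and in the client output when the job finishes.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DebugCounterMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    // Counters can be grouped by an enum (or by group/name strings).
    public enum RecordQuality { GOOD, MALFORMED }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        if (fields.length < 3) {                          // hypothetical validity check
            context.getCounter(RecordQuality.MALFORMED).increment(1);
            return;                                       // skip the bad record instead of failing
        }
        context.getCounter(RecordQuality.GOOD).increment(1);
        context.write(new Text(fields[0]), NullWritable.get());
    }
}
```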
