Hadoop Big Data Questions and Answers - Paper 754 - Skillgun
A file in HDFS is split into several blocks, and those blocks are stored in a set of __________.
Both a and b
Among the following, which node takes care of read and write operations with the file system?
Which operating system do we need to install for setting up a Hadoop environment?
If we have an operating system other than Linux, what software should be installed?
What is the default block size?
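For reference, the default HDFS block size is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x and later, controlled by the dfs.blocksize property. A minimal Java sketch (the file path is an assumption) that reads the effective block size:

    // Minimal sketch: read the effective HDFS block size for a path.
    // The path below is a hypothetical example.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();        // picks up hdfs-site.xml settings
            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/user/hadoop/sample.txt");     // hypothetical file
            long blockSize = fs.getDefaultBlockSize(p);       // effective dfs.blocksize for this path
            System.out.println("Default block size: " + blockSize + " bytes");
        }
    }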
Among the following, which node manages the Hadoop Distributed File System?
Among the following, identify the node where data is present in advance before any processing takes place.
Identify the node where the JobTracker runs and which accepts job requests from clients.
On which node do the Map and Reduce programs run?
Among the following, which schedules jobs and tracks the jobs assigned to the TaskTracker?
Which is the particular instance of an attempt to execute a task?
Among the following, __________ is an execution of a Mapper and a Reducer across a dataset.
__________ is an execution of a Mapper or a Reducer on a slice of data.
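To make the job and task terms concrete, here is a minimal MapReduce driver sketch (identity Mapper and Reducer; the command-line input and output paths are assumptions): the Job is the whole execution of the Mapper and Reducer across the dataset, each Mapper or Reducer run on one input split is a Task, and every retry of a Task is a Task Attempt.

    // Minimal MapReduce driver sketch using the identity Mapper and Reducer.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class IdentityJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "identity job"); // one Job = Mapper + Reducer over the whole dataset
            job.setJarByClass(IdentityJobDriver.class);
            job.setMapperClass(Mapper.class);                 // identity Mapper; a real job plugs in its own
            job.setReducerClass(Reducer.class);               // identity Reducer
            FileInputFormat.addInputPath(job, new Path(args[0]));   // assumed input path
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // assumed output path
            // Each input split becomes a Task; a retried Task run is a Task Attempt.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }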
If a file in HDFS is smaller than a single block size, then
It occupies the entire block size
The file cannot be stored
It can span over multiple blocks
It occupies only the size it needs, not the full block
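A quick way to confirm this behaviour is to list a small file's block locations: a file smaller than one block reports a single block whose length equals the file size rather than the configured block size. A sketch (the file path is an assumption):

    // Sketch: list the block locations of a small file; the reported block length
    // matches the actual file size, not the full configured block size.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SmallFileBlocks {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/user/hadoop/small.txt")); // hypothetical file
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation b : blocks) {
                System.out.println("offset=" + b.getOffset() + " length=" + b.getLength());
            }
        }
    }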
The files stored persistently on the local disk of the NameNode are
Namespace image, edit log and block locations
Namespace image and edit log
Edit log and block locations
Which of the following requires the highest bandwidth for data transfer between nodes?
Nodes on the same rack in the same data center.
Nodes in different data centers.
Different nodes in the same rack.
Data on the same node.
The inter-process communication between different nodes in Hadoop uses
Which technology is used to store data in Hadoop?
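For context, HDFS is the storage layer in Hadoop, and data is written to it through the FileSystem API. A minimal write sketch (the path and content are assumptions):

    // Sketch: write a small file into HDFS through the FileSystem API.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out = fs.create(new Path("/user/hadoop/hello.txt"))) { // hypothetical path
                out.writeUTF("stored in HDFS blocks across DataNodes");
            }
        }
    }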
Identify the mechanisms Hadoop uses to make the NameNode resilient to failure.
Back up the filesystem metadata to a local disk and a remote NFS mount.
Store in different CPUs.
Store the filesystem in the cloud.
None of the above
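The usual way to achieve the first option is to list more than one storage directory, for example one local disk and one NFS mount, in dfs.namenode.name.dir (normally configured in hdfs-site.xml rather than in code). The directory paths in this sketch are assumptions:

    // Sketch: point dfs.namenode.name.dir at two metadata directories,
    // one on local disk and one on an NFS mount, so the NameNode writes both.
    import org.apache.hadoop.conf.Configuration;

    public class NameNodeDirsExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("dfs.namenode.name.dir",
                     "file:///data/dfs/name,file:///mnt/nfs/dfs/name"); // hypothetical local + NFS paths
            System.out.println(conf.get("dfs.namenode.name.dir"));
        }
    }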
A checkpoint node in a Hadoop cluster is used to
Merge the fsimage and edit log and upload the result back to the active NameNode.
Check which DataNodes are not reachable.
Check if the NameNode is active.
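For context, how often the checkpoint node merges the fsimage and edit log is governed by dfs.namenode.checkpoint.period (seconds between checkpoints) and dfs.namenode.checkpoint.txns (uncheckpointed transactions that force a checkpoint). The sketch below reads these settings, using the usual Hadoop 2.x defaults as fallbacks:

    // Sketch: read the checkpointing settings; 3600 s and 1,000,000 transactions
    // are the usual Hadoop 2.x defaults if nothing is configured.
    import org.apache.hadoop.conf.Configuration;

    public class CheckpointSettings {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            long period = conf.getLong("dfs.namenode.checkpoint.period", 3600L);
            long txns   = conf.getLong("dfs.namenode.checkpoint.txns", 1000000L);
            System.out.println("checkpoint period: " + period + " s, txns threshold: " + txns);
        }
    }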