CCA-505 Exam Questions - Online Test
Your success in the Cloudera CCA-505 exam is our sole target, and we develop all our CCA-505 braindumps to facilitate it. Not only is our CCA-505 study material the best you can find, it is also the most detailed and the most up to date. These CCA-505 practice questions are written to the highest standards of technical accuracy.
NEW QUESTION 1
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which daemons need to be installed on your cluster's master nodes? (Choose two)
- A. ResourceManager
- B. DataNode
- C. NameNode
- D. JobTracker
- E. TaskTracker
- F. HMaster
Answer: AC
NEW QUESTION 2
You are planning a Hadoop cluster and considering implementing 10 Gigabit Ethernet as the network fabric. Which workloads benefit the most from a faster network fabric?
- A. When your workload generates a large amount of output data, significantly larger than the amount of intermediate data
- B. When your workload generates a large amount of intermediate data, on the order of the input data itself
- C. When your workload consumes a large amount of input data, relative to the entire capacity of HDFS
- D. When your workload consists of processor-intensive tasks
Answer: B
NEW QUESTION 3
Your Hadoop cluster contains nodes in three racks. You have NOT configured the dfs.hosts property in the NameNode's configuration file. What is the result?
- A. No new nodes can be added to the cluster until you specify them in the dfs.hosts file
- B. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster
- C. Any machine running the DataNode daemon can immediately join the cluster
- D. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes
Answer: C
NEW QUESTION 4
You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough to fit into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of this file in this situation?
- A. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file's replication factor doesn't fall below two)
- B. This file will be immediately re-replicated and all other HDFS operations on the cluster will halt until the cluster’s replication values are restored
- C. The file will remain under-replicated until the administrator brings that node back online
- D. The file will be re-replicated automatically after the NameNode determines it is under replicated based on the block reports it receives from the DataNodes
Answer: D
NEW QUESTION 5
Your cluster's mapred-site.xml includes the following parameters:
<name>mapreduce.map.memory.mb</name>
<value>4096</value>
<name>mapreduce.reduce.memory.mb</name>
<value>8192</value>
And your cluster's yarn-site.xml includes the following parameter:
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
What is the maximum amount of virtual memory allocated for each map before YARN will kill its Container?
- A. 4 GB
- B. 17.2 GB
- C. 24.6 GB
- D. 8.2 GB
Answer: D
Explanation:
Since map memory is 4 GB and the virtual-to-physical memory ratio is 2.1, a map container may use roughly 8 GB of virtual memory (including swap); beyond that, YARN kills the container.
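The arithmetic behind this answer can be checked directly; a minimal sketch, with the values taken from the question's configuration:

```python
# Virtual-memory limit YARN enforces on a map container: the physical
# allocation (mapreduce.map.memory.mb) times the virtual-to-physical
# ratio (yarn.nodemanager.vmem-pmem-ratio).
map_memory_mb = 4096          # mapreduce.map.memory.mb
vmem_pmem_ratio = 2.1         # yarn.nodemanager.vmem-pmem-ratio

vmem_limit_mb = map_memory_mb * vmem_pmem_ratio
print(vmem_limit_mb)          # 8601.6 MB
```

4096 MiB x 2.1 is about 8.4 GiB, which corresponds to choice D, the closest listed value.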
NEW QUESTION 6
Which steps must you take if you are running a Hadoop cluster with a single NameNode and six DataNodes, and you want to change a configuration parameter so that it affects all six DataNodes? (Choose two)
- A. You must modify the configuration file on each of the six DataNode machines.
- B. You must restart the NameNode daemon to apply the changes to the cluster.
- C. You must restart all six DataNode daemons to apply the changes to the cluster.
- D. You don't need to restart any daemon, as they will pick up changes automatically.
- E. You must modify the configuration files on the NameNode only.
- F. DataNodes read their configuration from the master nodes.

Answer: AC
NEW QUESTION 7
Given:
You want to clean up this list by removing jobs whose state is KILLED. Which command do you enter?
- A. yarn application -kill application_1374638600275_0109
- B. yarn rmadmin -refreshQueues
- C. yarn application -refreshJobHistory
- D. yarn rmadmin -kill application_1374638600275_0109
Answer: A
Explanation:
Reference: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_using-apache-hadoop/content/common_mrv2_commands.html
NEW QUESTION 8
A user comes to you, complaining that when she attempts to submit a Hadoop job, it fails. There is a directory in HDFS named /data/input. The JAR is named j.jar, and the driver class is named DriverClass. She runs the command:
hadoop jar j.jar DriverClass /data/input /data/output
The error message returned includes the line:
PrivilegedActionException as:training (auth:SIMPLE) cause: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/data/input
What is the cause of the error?
- A. The Hadoop configuration files on the client do not point to the cluster
- B. The directory name is misspelled in HDFS
- C. The name of the driver has been spelled incorrectly on the command line
- D. The output directory already exists
- E. The user is not authorized to run the job on the cluster
Answer: A
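The file:/ scheme in the exception shows the client resolved the path against the local filesystem rather than HDFS, which is what happens when the client's configuration does not point at the cluster. A minimal sketch of the relevant core-site.xml entry (the hostname and port are placeholders, not taken from the question):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- Placeholder NameNode address; without this (or with the default
         file:///), paths such as /data/input resolve to the local disk. -->
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```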
NEW QUESTION 9
You observe that the number of spilled records from Map tasks far exceeds the number of map output records. Your child heap size is 1GB and your io.sort.mb value is set to 100 MB. How would you tune your io.sort.mb value to achieve maximum memory to disk I/O ratio?
- A. Decrease the io.sort.mb value to 0
- B. Increase the io.sort.mb to 1GB
- C. For 1GB child heap size an io.sort.mb of 128 MB will always maximize memory to disk I/O
- D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records
Answer: D
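Choice D's criterion can be expressed as a simple ratio check; a sketch with made-up counter values (the names mirror the MapReduce job counters "Spilled Records" and "Map output records"):

```python
def spill_ratio(spilled_records, map_output_records):
    """Ratio of 1.0 means each map output record was spilled to disk
    exactly once, which is the best case; larger values indicate extra
    spill-and-merge passes caused by a too-small io.sort.mb buffer."""
    return spilled_records / map_output_records

# Too-small buffer: records spilled three times on average.
print(spill_ratio(30_000_000, 10_000_000))  # 3.0 -> raise io.sort.mb
# Well-tuned buffer: spilled records equal map output records.
print(spill_ratio(10_000_000, 10_000_000))  # 1.0
```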
NEW QUESTION 10
Which three basic configuration parameters must you set to migrate your cluster from MapReduce1 (MRv1) to MapReduce v2 (MRv2)?
- A. Configure the NodeManager hostname and enable services on YARN by setting the following property in yarn-site.xml:<name>yarn.nodemanager.hostname</name><value>your_nodeManager_hostname</value>
- B. Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml:<name>mapreduce.job.maps</name><value>2</value>
- C. Configure MapReduce as a framework running on YARN by setting the following property in mapred-site.xml:<name>mapreduce.framework.name</name><value>yarn</value>
- D. Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:<name>yarn.resourcemanager.hostname</name><value>your_resourceManager_hostname</value>
- E. Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml:<name>mapreduce.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
- F. Configure the NodeManager to enable MapReduce services on YARN by adding following property in yarn-site.xml:<name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value>
Answer: CDF
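For reference, the framework, ResourceManager, and shuffle-service properties quoted in these options can be sketched as the following configuration fragments (the hostname value is a placeholder):

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your_resourceManager_hostname</value> <!-- placeholder -->
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```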
NEW QUESTION 11
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit job A, so that only job A is running. A while later, you submit job B. Now jobs A and B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs?
- A. When job A gets submitted, it consumes all the tasks slots.
- B. When job A gets submitted, it doesn’t consume all the task slots
- C. When job B gets submitted, Job A has to finish first, before job B can scheduled
- D. When job B gets submitted, it will get assigned tasks, while job A continues to run with fewer tasks.
Answer: D
NEW QUESTION 12
Assume you have a file named foo.txt in your local directory. You issue the following three commands:
hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
What happens when you issue that third command?
- A. The write succeeds, overwriting foo.txt in HDFS with no warning
- B. The write silently fails
- C. The file is uploaded and stored as a plain file named input
- D. You get an error message telling you that input is not a directory
- E. You get an error message telling you that foo.txt already exists
- F. The file is not written to HDFS
- G. You get an error message telling you that foo.txt already exists, and asking you if you would like to overwrite
- H. You get a warning that foo.txt is being overwritten
Answer: E
NEW QUESTION 13
Which two are features of Hadoop's rack topology?
- A. Configuration of rack awareness is accomplished using a configuration file.
- B. You cannot use a rack topology script.
- C. Even for small clusters on a single rack, configuring rack awareness will improve performance.
- D. Rack location is considered in the HDFS block placement policy
- E. HDFS is rack aware but MapReduce daemons are not
- F. Hadoop gives preference to Intra rack data transfer in order to conserve bandwidth
Answer: DF
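For context on how rack awareness is configured: Hadoop resolves node-to-rack mappings through a topology script named in the net.topology.script.file.name property; the script receives one or more IPs or hostnames as arguments and prints one rack path per argument. A hypothetical sketch of such a script (the subnet-to-rack mapping is invented):

```python
#!/usr/bin/env python3
# Hypothetical rack topology script for net.topology.script.file.name.
# Hadoop invokes it with node IPs/hostnames as arguments and expects one
# rack path per argument on stdout.
import sys

# Invented mapping: first three octets of the IP identify the rack.
RACKS = {
    "10.0.1": "/rack1",
    "10.0.2": "/rack2",
    "10.0.3": "/rack3",
}

def rack_for(host):
    """Return the rack path for a node, defaulting to /default-rack."""
    subnet = ".".join(host.split(".")[:3])
    return RACKS.get(subnet, "/default-rack")

if __name__ == "__main__":
    for host in sys.argv[1:]:
        print(rack_for(host))
```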
NEW QUESTION 14
On a cluster running MapReduce v2 (MRv2) on YARN, a MapReduce job is given a directory of 10 plain text files as its input directory. Each file is made up of 3 HDFS blocks. How many Mappers will run?
- A. We cannot say; the number of Mappers is determined by the ResourceManager
- B. We cannot say; the number of Mappers is determined by the ApplicationManager
- C. We cannot say; the number of Mappers is determined by the developer
- D. 30
- E. 3
- F. 10
Answer: D
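Assuming the default FileInputFormat behavior of one input split per HDFS block for splittable plain-text files, the count is simple arithmetic:

```python
# One map task per input split; FileInputFormat makes one split per
# HDFS block for splittable plain-text input files.
files = 10
blocks_per_file = 3
num_mappers = files * blocks_per_file
print(num_mappers)  # 30
```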
NEW QUESTION 15
You suspect that your NameNode is incorrectly configured, and is swapping memory to disk. Which Linux commands help you to identify whether swapping is occurring? (Select 3)
- A. free
- B. df
- C. memcat
- D. top
- E. vmstat
- F. swapinfo
Answer: ADE
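All three of these commands ultimately report the kernel's memory counters; a minimal sketch that checks swap usage straight from /proc/meminfo (the sample text below is made up for illustration):

```python
def swap_used_kb(meminfo_text):
    """Parse SwapTotal/SwapFree (in kB) from /proc/meminfo-style text and
    return how much swap is in use. A nonzero, growing value while the
    NameNode runs suggests the host is swapping memory to disk."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            fields[key] = int(rest.split()[0])  # first token is the kB value
    return fields["SwapTotal"] - fields["SwapFree"]

sample = "SwapTotal:     2097148 kB\nSwapFree:      1048574 kB\n"
print(swap_used_kb(sample))  # 1048574

# On a live Linux host you would read the real file:
# with open("/proc/meminfo") as f:
#     print(swap_used_kb(f.read()))
```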
NEW QUESTION 16
Which YARN process runs as "container 0" of a submitted job and is responsible for resource requests?
- A. ResourceManager
- B. NodeManager
- C. JobHistoryServer
- D. ApplicationMaster
- E. JobTracker
- F. ApplicationManager
Answer: D
P.S. Easily pass CCA-505 Exam with 45 Q&As DumpSolutions Dumps & pdf Version, Welcome to Download the Newest DumpSolutions CCA-505 Dumps: https://www.dumpsolutions.com/CCA-505-dumps/ (45 New Questions)