CCDAK practice questions with answers and explanations, for the Confluent Certified Developer for Apache Kafka certification exam.
NEW QUESTION 1
How can you make a Kafka consumer stop polling data immediately and shut down the consumer application gracefully?
Answer: A
Explanation:
Call consumer.wakeup() from a separate thread. The blocking poll() call will then throw a WakeupException, which the polling thread catches in order to close the consumer cleanly. See https://stackoverflow.com/a/37748336/3019499
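The pattern can be sketched generically in plain Python (this models the behavior of KafkaConsumer.wakeup(); FakeConsumer and its methods are stand-ins, not the Kafka client API):

```python
import threading

class WakeupException(Exception):
    """Raised by poll() once wakeup() has been called from another thread."""
    pass

class FakeConsumer:
    """Illustrative stand-in for a Kafka consumer's wakeup/poll contract."""
    def __init__(self):
        self._wakeup = threading.Event()

    def poll(self):
        if self._wakeup.is_set():
            raise WakeupException()   # poll aborts immediately once woken
        return []                     # pretend no records arrived this round

    def wakeup(self):
        # The only consumer method that is safe to call from another thread.
        self._wakeup.set()

    def close(self):
        pass

consumer = FakeConsumer()
consumer.wakeup()   # e.g. invoked from a shutdown hook in another thread
try:
    while True:
        consumer.poll()   # normally: process returned records here
except WakeupException:
    consumer.close()      # graceful shutdown
    print("consumer closed")
```

The real KafkaConsumer behaves the same way: wakeup() is the one thread-safe method, and the polling thread is responsible for catching WakeupException and closing the consumer.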
NEW QUESTION 2
An ecommerce website maintains two topics - a high volume "purchase" topic with 5 partitions and a low volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics. How should you proceed?
Answer: C
Explanation:
In a KStream-KStream or KStream-KTable join, both topics must be co-partitioned (same number of partitions). This restriction does not apply to a join with a GlobalKTable, which is the most efficient option here because the "customer" topic has low volume.
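The semantics can be sketched in plain Python (an illustrative model, not the Kafka Streams API; topic names match the question, the data values are made up): a GlobalKTable is fully replicated to every application instance, so each stream record can be joined by key locally, regardless of partitioning.

```python
# GlobalKTable: every instance holds a full copy of the "customer" table.
customers = {"c1": "Alice", "c2": "Bob"}

# KStream: records flowing through the "purchase" topic, keyed by customer id.
purchases = [("c1", 30.0), ("c2", 12.5), ("c1", 8.0)]

# Stream/GlobalKTable join: a simple local lookup per stream record --
# no co-partitioning of the two topics is required.
enriched = [(cust_id, amount, customers.get(cust_id))
            for cust_id, amount in purchases]
print(enriched)
```

In a KStream-KTable join, by contrast, the table is sharded by partition, so both topics would have to be co-partitioned for the lookup to find the right shard.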
NEW QUESTION 3
Which actions will trigger partition rebalance for a consumer group? (select three)
Answer: ACD
Explanation:
A rebalance occurs when a consumer joins the group, leaves the group, or dies (stops sending heartbeats), or when the number of partitions of a subscribed topic increases.
NEW QUESTION 4
You have a consumer group of 12 consumers. When a consumer is killed abruptly by the process management system, it does not shut down gracefully, and it therefore takes up to 10 seconds for a rebalance to happen. The business would like a 3-second rebalance time. What should you do? (select two)
Answer: BE
Explanation:
session.timeout.ms must be decreased to 3 seconds so that a dead consumer is detected faster and the rebalance is triggered sooner, and the heartbeat thread must be quicker, so heartbeat.interval.ms must be decreased as well.
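A minimal consumer configuration fragment for this scenario (the property names are the real Kafka consumer settings; the exact values are illustrative, with the heartbeat interval kept at roughly one third of the session timeout):

```properties
# consumer.properties -- illustrative values
session.timeout.ms=3000      # declare the consumer dead after ~3 s of silence
heartbeat.interval.ms=1000   # send heartbeats well within the session timeout
```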
NEW QUESTION 5
What is the disadvantage of request/response communication?
Answer: C
Explanation:
A point-to-point (request-response) style couples the client to the server.
NEW QUESTION 6
Which of the following statements are true regarding the number of partitions of a topic?
Answer: C
Explanation:
Partitions can only be added to an existing topic, never removed, and this must be done using the kafka-topics.sh command with the --alter and --partitions options.
NEW QUESTION 7
What is true about Kafka brokers and clients from version 0.10.2 onwards?
Answer: C
Explanation:
Kafka's bidirectional client compatibility, introduced in 0.10.2, allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/
NEW QUESTION 8
A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the timeout value for followers to connect to Zookeeper?
Answer: D
Explanation:
tickTime is 2000 ms, and initLimit is the setting (in ticks) that governs how long followers have to connect and sync to the leader, so the timeout is 2000 * 20 = 40000 ms = 40 s.
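The arithmetic can be checked directly (a trivial sketch; the variable names are mine):

```python
# Zookeeper follower-connection timeout = tickTime * initLimit
tick_time_ms = 2000    # tickTime: length of one "tick" in milliseconds
init_limit_ticks = 20  # initLimit: ticks allowed for followers to connect/sync

timeout_ms = tick_time_ms * init_limit_ticks
print(timeout_ms)         # 40000 (ms)
print(timeout_ms / 1000)  # 40.0 (s)
```

syncLimit (5 ticks here) is a distractor: it bounds how far followers may lag once connected, not the connection timeout.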
NEW QUESTION 9
A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing while the ensemble still runs?
Answer: C
Explanation:
A 5-node Zookeeper ensemble needs a majority of 3 nodes to maintain a quorum, so up to 2 nodes can fail.
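The quorum math generalizes to any ensemble size (a trivial sketch; the function name is mine):

```python
# A Zookeeper ensemble of n servers needs a strict majority (n // 2 + 1)
# of servers up to maintain a quorum; the rest can fail.
def tolerable_failures(n: int) -> int:
    majority = n // 2 + 1
    return n - majority

print(tolerable_failures(5))  # 2 (a majority of 3 must remain)
print(tolerable_failures(3))  # 1
```

This is also why ensembles use odd sizes: going from 5 to 6 servers raises the majority to 4 without tolerating any additional failures.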
NEW QUESTION 10
A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before. Where will the consumer read from?
Answer: C
Explanation:
auto.offset.reset=none means the consumer will crash if the offset it is recovering from has been deleted from Kafka, which is the case here, as the committed offset 10 is below the log start offset 45.
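The decision logic can be modeled in a few lines (an illustrative model, not the Kafka client API; the function and parameter names are mine):

```python
# Where does a consumer start reading, given its committed offset and the
# range of offsets actually available in the partition?
def starting_offset(committed, log_start, log_end, auto_offset_reset):
    if committed is not None and log_start <= committed <= log_end:
        return committed  # resume from the valid committed offset
    if auto_offset_reset == "earliest":
        return log_start
    if auto_offset_reset == "latest":
        return log_end
    # auto.offset.reset=none: no valid offset and no fallback -> fail
    raise RuntimeError("committed offset no longer available")

# Committed offset 10 was deleted (data now spans 45..2311), so with
# auto.offset.reset=none the consumer throws an exception instead of reading.
```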
NEW QUESTION 11
What isn't a feature of the Confluent schema registry?
Answer: A
Explanation:
The data itself is stored on the Kafka brokers, not in the Schema Registry; the registry only stores and serves schemas.
NEW QUESTION 12
A producer just sent a message to the leader broker for a topic partition. The producer used acks=1 and therefore the data has not yet been replicated to followers. Under which conditions will the consumer see the message?
Answer: D
Explanation:
The high watermark is an advanced Kafka concept: it is advanced once all in-sync replicas have replicated the latest offsets. A consumer can only read up to the high watermark, which can be less than the highest offset on the leader in the case of acks=1.
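The rule can be illustrated in a couple of lines (an illustrative computation, not broker internals; names are mine). Treating each replica's log-end offset (LEO) as the next offset it would write, the high watermark is the minimum LEO across the in-sync replicas:

```python
# High watermark = minimum log-end offset across all in-sync replicas.
# Consumers may only read offsets strictly below the high watermark.
def high_watermark(isr_log_end_offsets):
    return min(isr_log_end_offsets)

# acks=1 scenario: the leader holds offsets 0..11 (LEO 12), but the two
# followers have only replicated offsets 0..10 (LEO 11).
hw = high_watermark([12, 11, 11])
print(hw)  # 11 -> the message at offset 11 is not yet visible to consumers
```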
NEW QUESTION 13
You have a Kafka cluster and all the topics have a replication factor of 3. One intern at your company stopped a broker and accidentally deleted all the data of that broker on disk. What will happen if the broker is restarted?
Answer: B
Explanation:
Kafka's replication mechanism makes it resilient to this scenario: a broker that loses its data on disk will recover by replicating the data from the other brokers once it restarts.
NEW QUESTION 14
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?
Answer: C
Explanation:
TLS, although in Kafka and Confluent configuration it is still referred to as SSL.
NEW QUESTION 15
Which of the following is not an Avro primitive type?
Answer: D
Explanation:
date is a logical type, not a primitive type. Avro's primitive types are null, boolean, int, long, float, double, bytes, and string.
NEW QUESTION 16
We want the average of all events in every five-minute window updated every minute. What kind of Kafka Streams window will be required on the stream?
Answer: D
Explanation:
A hopping window is defined by two properties: the window's size and its advance interval (aka "hop"), e.g. a hopping window with a size of 5 minutes and an advance interval of 1 minute.
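Because the advance is smaller than the size, the windows overlap and each event falls into several of them; with a 5-minute size and a 1-minute hop, every event belongs to 5 windows. A small sketch of that membership rule (an illustrative calculation, not the Kafka Streams API; names are mine):

```python
# Which hopping windows contain an event at timestamp t (in ms)?
# Window starts are aligned to multiples of the advance; a window
# [start, start + size) contains t when  t - size < start <= t.
def containing_windows(t_ms, size_ms, advance_ms):
    starts = []
    start = (t_ms // advance_ms) * advance_ms  # latest window start <= t
    while start > t_ms - size_ms:
        starts.append(start)
        start -= advance_ms
    return sorted(s for s in starts if s >= 0)

minute = 60_000
# 5-minute windows advancing every minute: an event at t = 7m30s falls
# into the windows starting at minutes 3, 4, 5, 6, and 7.
print(len(containing_windows(7 * minute + 30_000, 5 * minute, minute)))  # 5
```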
NEW QUESTION 17
I am producing Avro data on my Kafka cluster that is integrated with the Confluent Schema Registry. After a schema change that is incompatible, I know my data will be rejected. Which component will reject the data?
Answer: A
Explanation:
The Confluent Schema Registry is your safeguard against incompatible schema changes and is the component that ensures no breaking schema evolution is possible. Kafka brokers do not look at your payload or its schema, and therefore will not reject the data.