CCDAK Exam Questions - Online Test

NEW QUESTION 1
How can you make a Kafka consumer stop polling data from Kafka immediately and shut down the consumer application gracefully?

  • A. Call consumer.wakeup() and catch a WakeupException
  • B. Call consumer.poll() in another thread
  • C. Kill the consumer thread

Answer: A

Explanation:
consumer.wakeup() is thread-safe and causes a blocked poll() to throw a WakeupException, which you catch in order to close the consumer cleanly. See https://stackoverflow.com/a/37748336/3019499
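A minimal sketch of this shutdown pattern (the broker address, group id, and topic name are illustrative assumptions):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.WakeupException;

    public class GracefulShutdownConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "demo-group");              // illustrative group id
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("demo-topic")); // illustrative topic

            // wakeup() is the one consumer method that is safe to call from another thread;
            // it makes a blocked poll() throw a WakeupException.
            Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                    records.forEach(record -> System.out.println(record.value()));
                }
            } catch (WakeupException e) {
                // expected on shutdown, nothing to do
            } finally {
                consumer.close(); // leave the group cleanly so a rebalance happens right away
            }
        }
    }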

NEW QUESTION 2
An e-commerce website maintains two topics - a high-volume "purchase" topic with 5 partitions and a low-volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics. How should you proceed?

  • A. Repartition the purchase topic to have 3 partitions
  • B. Repartition customer topic to have 5 partitions
  • C. Model customer as a GlobalKTable
  • D. Do a KStream / KTable join after a repartition step

Answer: C

Explanation:
A KStream-KTable join requires both topics to be co-partitioned, i.e. to have the same number of partitions. This restriction does not apply to a join against a GlobalKTable, which is also the most efficient option here, since the low-volume customer topic is small enough to be replicated in full to every application instance.
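A minimal Kafka Streams sketch of that join, assuming purchase records are keyed by customer id (the key mapping, join logic, and output topic are illustrative):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.KStream;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> purchases = builder.stream("purchase");           // 5 partitions
    GlobalKTable<String, String> customers = builder.globalTable("customer"); // 3 partitions, replicated in full

    // No co-partitioning requirement: every instance has the whole table locally.
    KStream<String, String> enriched = purchases.join(
            customers,
            (purchaseKey, purchaseValue) -> purchaseKey,  // assumes purchases are keyed by customer id
            (purchaseValue, customerValue) -> purchaseValue + " / " + customerValue);

    enriched.to("purchases-enriched"); // illustrative output topic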

NEW QUESTION 3
Which actions will trigger partition rebalance for a consumer group? (select three)

  • A. Increase partitions of a topic
  • B. Remove a broker from the cluster
  • C. Add a new consumer to consumer group
  • D. A consumer in a consumer group shuts down
  • E. Add a broker to the cluster

Answer: ACD

Explanation:
A rebalance occurs when a consumer is added to or removed from the group, when a consumer dies, or when the number of partitions of a subscribed topic increases.
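To react to these rebalances in code, a ConsumerRebalanceListener can be passed at subscribe time; a sketch (the topic name is illustrative, and consumer is assumed to be an already-configured KafkaConsumer):

    import java.util.Collection;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    consumer.subscribe(Collections.singletonList("demo-topic"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // runs before partitions are taken away, e.g. to commit offsets
            System.out.println("Revoked: " + partitions);
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // runs after the rebalance with the new assignment
            System.out.println("Assigned: " + partitions);
        }
    });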

NEW QUESTION 4
You have a consumer group of 12 consumers. When a consumer is killed abruptly by the process management system, it does not shut down gracefully, and it then takes up to 10 seconds for a rebalance to happen. The business would like a 3-second rebalance time. What should you do? (select two)

  • A. Increase session.timeout.ms
  • B. Decrease session.timeout.ms
  • C. Increase heartbeat.interval.ms
  • D. Decrease max.poll.interval.ms
  • E. Increase max.poll.interval.ms
  • F. Decrease heartbeat.interval.ms

Answer: BF

Explanation:
session.timeout.ms must be decreased to 3 seconds so that a dead consumer is detected, and the rebalance triggered, within 3 seconds. Since heartbeat.interval.ms must stay well below session.timeout.ms (conventionally about a third of it), it must be decreased as well.
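A sketch of the relevant consumer settings (the broker address, group id, and exact heartbeat value are illustrative):

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
    props.put("group.id", "demo-group");              // illustrative group id
    props.put("session.timeout.ms", "3000");          // dead consumer detected after 3 s
    props.put("heartbeat.interval.ms", "1000");       // must stay well below session.timeout.ms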

NEW QUESTION 5
What is the disadvantage of request/response communication?

  • A. Scalability
  • B. Reliability
  • C. Coupling
  • D. Cost

Answer: C

Explanation:
A point-to-point (request-response) style couples the client to the server.

NEW QUESTION 6
Which of the following statements are true regarding the number of partitions of a topic?

  • A. The number of partitions of a topic cannot be altered
  • B. We can add partitions to a topic by adding a broker to the cluster
  • C. We can add partitions to a topic using the kafka-topics.sh command
  • D. We can remove partitions from a topic by removing a broker
  • E. We can remove partitions from a topic using the kafka-topics.sh command

Answer: C

Explanation:
Partitions can only be added to an existing topic, never removed, and this is done using the kafka-topics.sh command.
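For illustration, the alter command looks roughly like this (broker address, topic name, and partition count are assumptions; older Kafka versions take --zookeeper instead of --bootstrap-server):

    kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic demo-topic --partitions 8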

NEW QUESTION 7
What is true about Kafka brokers and clients from version 0.10.2 onwards?

  • A. Clients and brokers must have the exact same version to be able to communicate
  • B. A newer client can talk to a newer broker, but an older client cannot talk to a newer broker
  • C. A newer client can talk to a newer broker, and an older client can talk to a newer broker
  • D. A newer client can't talk to a newer broker, but an older client can talk to a newer broker

Answer: C

Explanation:
Kafka's new bidirectional client compatibility, introduced in 0.10.2, allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/

NEW QUESTION 8
A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the timeout value for followers to connect to Zookeeper?

  • A. 20 sec
  • B. 10 sec
  • C. 2000 ms
  • D. 40 sec

Answer: D

Explanation:
tickTime is 2000 ms, and initLimit is the setting that applies when followers establish a connection to the leader, so the timeout is 2000 * 20 = 40000 ms = 40 s.
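Expressed as a zoo.cfg fragment, using the values from the question (comments are mine):

    tickTime=2000   # base time unit, in milliseconds
    initLimit=20    # follower connect timeout: 20 ticks * 2000 ms = 40 s
    syncLimit=5     # follower sync timeout: 5 ticks * 2000 ms = 10 s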

NEW QUESTION 9
A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing and the ensemble still run?

  • A. 3
  • B. 4
  • C. 2
  • D. 1

Answer: C

Explanation:
A 5-node ZooKeeper ensemble needs a majority of floor(5/2) + 1 = 3 nodes to keep running, so up to 2 nodes can fail.

NEW QUESTION 10
A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before. Where will the consumer read from?

  • A. offset 45
  • B. offset 10
  • C. it will crash
  • D. offset 2311

Answer: C

Explanation:
auto.offset.reset=none means the consumer will throw an exception instead of resetting when the offset it is recovering from is no longer available in Kafka, which is the case here since the committed offset 10 is below the earliest available offset 45.
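As a sketch, the setting and its failure mode in the Java client (the surrounding consumer configuration is assumed):

    Properties props = new Properties();
    props.put("auto.offset.reset", "none"); // never reset silently; fail instead
    // With the committed offset (10) outside the available range (45..2311),
    // poll() throws org.apache.kafka.clients.consumer.OffsetOutOfRangeException.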

NEW QUESTION 11
What isn't a feature of the Confluent schema registry?

  • A. Store avro data
  • B. Enforce compatibility rules
  • C. Store schemas

Answer: A

Explanation:
The Schema Registry stores schemas and enforces compatibility rules; the Avro data itself is stored on the Kafka brokers.

NEW QUESTION 12
A producer just sent a message to the leader broker for a topic partition. The producer used acks=1 and therefore the data has not yet been replicated to followers. Under which conditions will the consumer see the message?

  • A. Right away
  • B. When the message has been fully replicated to all replicas
  • C. Never, the produce request will fail
  • D. When the high watermark has advanced

Answer: D

Explanation:
The high watermark advances once all in-sync replicas have replicated the latest offsets. A consumer can only read up to the high watermark, which can be below the highest offset on the leader, as is the case here with acks=1 before replication completes.

NEW QUESTION 13
You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen if the broker is restarted?

  • A. The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
  • B. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
  • C. The broker will crash
  • D. The broker will start and won't have any data; if the broker becomes leader, we have data loss

Answer: B

Explanation:
Kafka's replication mechanism makes it resilient to a broker losing its data on disk: the restarted broker recovers by replicating the data back from the other brokers, and only then rejoins the ISR.

NEW QUESTION 14
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?

  • A. Kerberos
  • B. SASL
  • C. HTTPS (SSL/TLS)
  • D. HTTP

Answer: C

Explanation:
TLS, though it is still commonly referred to as SSL.

NEW QUESTION 15
Which of the following is not an Avro primitive type?

  • A. string
  • B. long
  • C. int
  • D. date
  • E. null

Answer: D

Explanation:
date is a logical type, not a primitive; in Avro it annotates the underlying int primitive.
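For illustration, an Avro field using the date logical type on top of the int primitive (record and field names are made up):

    {
      "type": "record",
      "name": "Order",
      "fields": [
        {"name": "orderDate", "type": {"type": "int", "logicalType": "date"}}
      ]
    }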

NEW QUESTION 16
We want the average of all events in every five-minute window updated every minute. What kind of Kafka Streams window will be required on the stream?

  • A. Session window
  • B. Tumbling window
  • C. Sliding window
  • D. Hopping window

Answer: D

Explanation:
A hopping window is defined by two properties: the window's size and its advance interval (aka "hop"), e.g., a hopping window with a size of 5 minutes and an advance interval of 1 minute.
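A sketch in the Kafka Streams DSL (the input topic is illustrative; count() stands in for the average, which would need a custom aggregate of sum and count):

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.TimeWindows;

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("events")                                      // illustrative input topic
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(5))         // window size
                               .advanceBy(Duration.ofMinutes(1))) // hop: emit an update every minute
        .count();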

NEW QUESTION 17
I am producing Avro data on my Kafka cluster, which is integrated with the Confluent Schema Registry. After an incompatible schema change, I know my data will be rejected. Which component will reject the data?

  • A. The Confluent Schema Registry
  • B. The Kafka Broker
  • C. The Kafka Producer itself
  • D. Zookeeper

Answer: A

Explanation:
The Confluent Schema Registry is your safeguard against incompatible schema changes and is the component that ensures no breaking schema evolution is possible. Kafka brokers do not inspect your payload or its schema, and therefore will not reject the data.
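Compatibility can also be checked up front against the registry's REST API; a sketch (the registry URL and subject name are assumptions, and the schema payload is elided):

    curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "..."}' \
      http://localhost:8081/compatibility/subjects/purchase-value/versions/latest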

NEW QUESTION 18
......
