Your success in the Confluent CCDAK exam is our sole target, and we develop all our CCDAK braindumps to facilitate it. Our CCDAK study material is not only the best you can find, it is also the most detailed and the most up to date. CCDAK Practice Exams for Confluent CCDAK are written to the highest standards of technical accuracy.
Confluent CCDAK Free Dumps Questions Online, Read and Test Now.
NEW QUESTION 1
Which KSQL queries write to Kafka?
Answer: CD
Explanation:
SHOW STREAMS and EXPLAIN <query> statements run against the KSQL server that the KSQL client is connected to. They don't communicate directly with Kafka. CREATE STREAM WITH <topic> and CREATE TABLE WITH <topic> write metadata to the KSQL command topic. Persistent queries based on CREATE STREAM AS SELECT and CREATE TABLE AS SELECT read and write to Kafka topics. Non-persistent queries based on SELECT that are stateless only read from Kafka topics, for example SELECT … FROM foo WHERE …. Non-persistent queries that are stateful read and write to Kafka, for example, COUNT and JOIN. The data in Kafka is deleted automatically when you terminate the query with CTRL-C.
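For illustration, the difference between a persistent and a non-persistent query looks like the following (the stream name pageviews and the column userid are hypothetical; syntax per classic KSQL):

```sql
-- Persistent query: continuously reads from Kafka and writes its
-- aggregated results to a new, compacted Kafka topic
CREATE TABLE pageview_counts AS
  SELECT userid, COUNT(*) AS views
  FROM pageviews
  GROUP BY userid;

-- Non-persistent, stateless query: only reads from Kafka and streams
-- results back to the client; nothing is written to a topic
SELECT * FROM pageviews WHERE userid = 'user_1';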
NEW QUESTION 2
You are running a Kafka Streams application in a Docker container managed by Kubernetes, and upon application restart it takes a long time for the container to restore the state and resume processing the data. How can you dramatically improve the application restart time?
Answer: A
Explanation:
Although a Kafka Streams application can always rebuild its local state from Kafka (the state is backed by changelog topics), recovering the state from Kafka can take a while and consume a lot of resources. To speed up recovery, it is advised to store the Kafka Streams state on a persistent volume, so that only the missing part of the state needs to be recovered.
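One way to do this, sketched below with illustrative paths and values, is to point the state directory at a volume that survives container restarts, and optionally keep standby replicas:

```properties
# Kafka Streams configuration (paths and values are illustrative)
# Store the local RocksDB state on a mounted persistent volume
state.dir=/var/lib/kafka-streams
# Keep a warm standby copy of the state on another instance
num.standby.replicas=1
```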
NEW QUESTION 3
What exceptions may be caught by the following producer? (select two)

ProducerRecord<String, String> record =
    new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);
} catch (Exception e) {
    e.printStackTrace();
}
Answer: BD
Explanation:
These are the client-side exceptions that may be encountered before the message is sent to the broker and before a future is returned by the send() method (for example, SerializationException if a key or value cannot be serialized, or BufferExhaustedException if the producer's buffer is full).
NEW QUESTION 4
A producer is sending messages with null key to a topic with 6 partitions using the DefaultPartitioner. Where will the messages be stored?
Answer: A
Explanation:
Messages with a null key are distributed among partitions in a round-robin fashion.
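The round-robin behaviour can be sketched in plain Java. This is an illustrative sketch, not the actual client code; note also that clients since Kafka 2.4 use a sticky partitioner for null keys instead of strict round-robin.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of round-robin partition assignment for
// null-key records; NOT the actual DefaultPartitioner source.
public class RoundRobinSketch {
    private final AtomicInteger counter = new AtomicInteger(0);

    // Pick the next partition in a cycle of numPartitions
    int partitionForNullKey(int numPartitions) {
        return counter.getAndIncrement() % numPartitions;
    }

    public static void main(String[] args) {
        RoundRobinSketch partitioner = new RoundRobinSketch();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 8; i++) {
            if (i > 0) sb.append(' ');
            sb.append(partitioner.partitionForNullKey(6));
        }
        System.out.println(sb); // prints: 0 1 2 3 4 5 0 1
    }
}
```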
NEW QUESTION 5
To produce data to a topic, a producer must provide the Kafka client with...
Answer: D
Explanation:
All brokers can respond to a Metadata request, so a client can connect to any broker in the cluster and then figure out on its own which brokers to send data to.
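In practice, this means a producer configuration only needs a list of bootstrap brokers (any subset of the cluster; host names below are placeholders) plus serializers for the key and value:

```properties
# Minimal producer configuration (host names are placeholders)
bootstrap.servers=broker1:9092,broker2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```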
NEW QUESTION 6
You want to send a message of size 3 MB to a topic with default message size configuration. How does KafkaProducer handle large messages?
Answer: C
Explanation:
RecordTooLargeException (MESSAGE_TOO_LARGE) is not a retryable exception, so the producer fails immediately instead of retrying.
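If larger messages are genuinely needed, the size limit has to be raised consistently on the broker (or per topic) and on the producer; the 4 MB value below is illustrative:

```properties
# Broker-wide limit
message.max.bytes=4194304
# Per-topic override
max.message.bytes=4194304
# Producer-side limit
max.request.size=4194304
```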
NEW QUESTION 7
To import data from external databases, I should use
Answer: D
Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases, and Kafka Connect Source is used to import data from external databases into Kafka.
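For example, a Confluent JDBC source connector could be configured as below (connection details, column and topic names are hypothetical):

```json
{
  "name": "jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/mydb",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "db-"
  }
}
```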
NEW QUESTION 8
A consumer is configured with enable.auto.commit=false. What happens when close() is called on the consumer object?
Answer: B
Explanation:
Calling close() on the consumer immediately triggers a partition rebalance, as the consumer will no longer be available.
NEW QUESTION 9
How will you set the retention for the topic named 'my-topic' to 1 hour?
Answer: C
Explanation:
retention.ms can be configured at the topic level, either when creating the topic or by altering it afterwards. It shouldn't be set at the broker level (log.retention.ms), as this would impact all the topics in the cluster, not just the one we are interested in.
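With the kafka-configs tool this could look like the following (the bootstrap server is a placeholder; 1 hour = 3,600,000 ms):

```shell
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=3600000
```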
NEW QUESTION 10
The exactly once guarantee in the Kafka Streams is for which flow of data?
Answer: A
Explanation:
Kafka Streams can only guarantee exactly once processing if you have a Kafka to Kafka topology.
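In Kafka Streams, exactly-once processing for such a Kafka-to-Kafka topology is enabled with a single setting (newer versions also offer the more efficient exactly_once_v2):

```properties
processing.guarantee=exactly_once
```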
NEW QUESTION 11
What is a generic unique id that I can use for messages I receive from a consumer?
Answer: B
Explanation:
(Topic, Partition, Offset) uniquely identifies a message in Kafka.
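A common convention is to concatenate the three coordinates into a single string id. The helper below is an illustrative sketch (the format and the topic name "orders" are assumptions, not an API):

```java
// Illustrative helper: a (topic, partition, offset) triple uniquely
// identifies a message, so concatenating the three coordinates yields
// a generic unique id. The format is a convention, not a Kafka API.
public class MessageId {
    static String messageId(String topic, int partition, long offset) {
        return topic + "-" + partition + "-" + offset;
    }

    public static void main(String[] args) {
        System.out.println(messageId("orders", 2, 41L)); // prints: orders-2-41
    }
}
```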
NEW QUESTION 12
The Controller is a broker that is... (select two)
Answer: AB
Explanation:
The Controller is a broker that, in addition to the usual broker functions, is responsible for partition leader election. The controller is elected via Zookeeper, and at any time only one broker can be the controller.
NEW QUESTION 13
By default, which replica will be elected as a partition leader? (select two)
Answer: BD
Explanation:
The preferred leader is the broker that was the leader when the topic was created; it is preferred because, when partitions are first created, the leaders are balanced between brokers. Otherwise, any of the in-sync replicas (ISR) can be elected leader, as long as unclean.leader.election.enable=false (the default).
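If leadership has drifted away from the preferred replicas (for example after broker restarts), it can be restored with the leader-election tool shipped with recent Kafka versions (bootstrap server is a placeholder):

```shell
kafka-leader-election --bootstrap-server localhost:9092 \
  --election-type preferred --all-topic-partitions
```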
NEW QUESTION 14
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> textLines = builder.stream("word-count-input");
KTable<String, Long> wordCounts = textLines
    .mapValues(textLine -> textLine.toLowerCase())
    .flatMapValues(textLine -> Arrays.asList(textLine.split("\\W+")))
    .selectKey((key, word) -> word)
    .groupByKey()
    .count(Materialized.as("Counts"));
wordCounts.toStream().to("word-count-output", Produced.with(Serdes.String(), Serdes.Long()));
builder.build();
What is an adequate topic configuration for the topic word-count-output?
Answer: D
Explanation:
The result is aggregated into a table with the unique word as key and its frequency as value. We have to enable log compaction for this topic to align the topic's cleanup policy with KTable semantics.
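A matching topic configuration would therefore enable compaction, so that only the latest count per word is retained:

```properties
cleanup.policy=compact
```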
NEW QUESTION 15
What Java library is KSQL based on?
Answer: A
Explanation:
KSQL is based on Kafka Streams and allows you to express transformations in SQL; these are automatically converted to a Kafka Streams program in the backend.
NEW QUESTION 16
How much should be the heap size of a broker in a production setup on a machine with 256 GB of RAM, in PLAINTEXT mode?
Answer: A
Explanation:
In Kafka, only a small heap size is needed, while the rest of the RAM automatically goes to the OS page cache. The heap size goes up slightly if you need to enable SSL.
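In practice, the broker heap is typically set via the KAFKA_HEAP_OPTS environment variable; the 6 GB value below is illustrative, and the right size depends on the workload:

```shell
export KAFKA_HEAP_OPTS="-Xmx6g -Xms6g"
```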
NEW QUESTION 17
To prevent network-induced duplicates when producing to Kafka, I should use
Answer: B
Explanation:
Producer idempotence helps prevent network-introduced duplicates. More details here: https://cwiki.apache.org/confluence/display/KAFKA/Idempotent+Producer
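In producer configuration terms, this comes down to one setting (acks=all is implied by idempotence, listed here for clarity):

```properties
enable.idempotence=true
acks=all
```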
NEW QUESTION 18
......
P.S. Dumps-hub.com is now offering 100% pass-guaranteed CCDAK dumps! All CCDAK exam questions have been updated with correct answers: https://www.dumps-hub.com/CCDAK-dumps.html (150 New Questions)