
DAS-C01 Exam Questions - Online Test



Your success in the Amazon-Web-Services DAS-C01 exam is our sole target, and we develop all our DAS-C01 braindumps to help you reach it. Our DAS-C01 study material is not only the best you can find, it is also the most detailed and the most up to date. DAS-C01 Practice Exams for Amazon-Web-Services DAS-C01 are written to the highest standards of technical accuracy.

We also have free DAS-C01 dumps questions for you:

NEW QUESTION 1
A company has collected more than 100 TB of log files in the last 24 months. The files are stored as raw text in a dedicated Amazon S3 bucket. Each object has a key of the form year-month-day_log_HHmmss.txt where HHmmss represents the time the log file was initially created. A table was created in Amazon Athena that points to the S3 bucket. One-time queries are run against a subset of columns in the table several times an hour.
A data analyst must make changes to reduce the cost of running these queries. Management wants a solution with minimal maintenance overhead.
Which combination of steps should the data analyst take to meet these requirements? (Choose three.)

  • A. Convert the log files to Apache Avro format.
  • B. Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
  • C. Convert the log files to Apache Parquet format.
  • D. Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
  • E. Drop and recreate the table with the PARTITIONED BY clause.
  • F. Run the ALTER TABLE ADD PARTITION statement.
  • G. Drop and recreate the table with the PARTITIONED BY clause.
  • H. Run the MSCK REPAIR TABLE statement.

Answer: BCF
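For context, the combination of a date= partition prefix, Parquet conversion, and partition registration might look like the minimal boto3/Athena sketch below. The bucket, database, and table names are hypothetical placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical locations; replace with real bucket and database names.
RESULTS = "s3://example-athena-results/"
DATABASE = "logs_db"

ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS app_logs (
    message string,
    level   string
)
PARTITIONED BY (`date` string)          -- matches the date=YYYY-MM-DD/ key prefix
STORED AS PARQUET
LOCATION 's3://example-log-bucket/'
"""

add_partition = """
ALTER TABLE app_logs ADD IF NOT EXISTS
PARTITION (`date` = '2021-06-01')
LOCATION 's3://example-log-bucket/date=2021-06-01/'
"""

for statement in (ddl, add_partition):
    athena.start_query_execution(
        QueryString=statement,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": RESULTS},
    )
```

Because Athena is billed per byte scanned, the columnar Parquet format plus date partitions means the hourly one-time queries read only the columns and days they need.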

NEW QUESTION 2
A large telecommunications company is planning to set up a data catalog and metadata management for multiple data sources running on AWS. The catalog will be used to maintain the metadata of all the objects stored in the data stores. The data stores are composed of structured sources like Amazon RDS and Amazon Redshift, and semistructured sources like JSON and XML files stored in Amazon S3. The catalog must be updated on a regular basis, be able to detect the changes to object metadata, and require the least possible administration.
Which solution meets these requirements?

  • A. Use Amazon Aurora as the data catalog.
  • B. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the data catalog in Aurora.
  • C. Schedule the Lambda functions periodically.
  • D. Use the AWS Glue Data Catalog as the central metadata repository.
  • E. Use AWS Glue crawlers to connect to multiple data stores and update the Data Catalog with metadata changes.
  • F. Schedule the crawlers periodically to update the metadata catalog.
  • G. Use Amazon DynamoDB as the data catalog.
  • H. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the DynamoDB catalog.
  • I. Schedule the Lambda functions periodically.
  • J. Use the AWS Glue Data Catalog as the central metadata repository.
  • K. Extract the schema for RDS and Amazon Redshift sources and build the Data Catalog.
  • L. Use AWS crawlers for data stored in Amazon S3 to infer the schema and automatically update the Data Catalog.

Answer: D
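As a rough illustration of the Glue-based approach in the answer, the sketch below creates a scheduled crawler over hypothetical JDBC connections and an S3 path; the crawler name, role, connections, and paths are all placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Hypothetical role, connection, and path names.
glue.create_crawler(
    Name="metadata-crawler",
    Role="AWSGlueServiceRole-example",
    DatabaseName="enterprise_catalog",
    Targets={
        "JdbcTargets": [
            {"ConnectionName": "redshift-conn", "Path": "analytics/%"},
            {"ConnectionName": "rds-conn", "Path": "sales/%"},
        ],
        "S3Targets": [{"Path": "s3://example-semistructured/json/"}],
    },
    # Run nightly so metadata changes are picked up automatically.
    Schedule="cron(0 2 * * ? *)",
    SchemaChangePolicy={
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
)
glue.start_crawler(Name="metadata-crawler")
```

The schedule and schema-change policy let the Data Catalog track metadata changes with no custom code to administer.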

NEW QUESTION 3
A media analytics company consumes a stream of social media posts. The posts are sent to an Amazon Kinesis data stream partitioned on user_id. An AWS Lambda function retrieves the records and validates the content before loading the posts into an Amazon Elasticsearch cluster. The validation process needs to receive the posts for a given user in the order they were received. A data analyst has noticed that, during peak hours, the social media platform posts take more than an hour to appear in the Elasticsearch cluster.
What should the data analyst do to reduce this latency?

  • A. Migrate the validation process to Amazon Kinesis Data Firehose.
  • B. Migrate the Lambda consumers from standard data stream iterators to an HTTP/2 stream consumer.
  • C. Increase the number of shards in the stream.
  • D. Configure multiple Lambda functions to process the stream.

Answer: D

NEW QUESTION 4
A company is hosting an enterprise reporting solution with Amazon Redshift. The application provides reporting capabilities to three main groups: an executive group to access financial reports, a data analyst group to run long-running ad-hoc queries, and a data engineering group to run stored procedures and ETL processes. The executive team requires queries to run with optimal performance. The data engineering team expects queries to take minutes.
Which Amazon Redshift feature meets the requirements for this task?

  • A. Concurrency scaling
  • B. Short query acceleration (SQA)
  • C. Workload management (WLM)
  • D. Materialized views

Answer: D

Explanation:

Materialized views precompute and store the results of a query, so the reports that reference them return results with optimal performance instead of recomputing the aggregation on every run.
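As a loose illustration only (not part of the original question), a materialized view could be created and refreshed through the Redshift Data API roughly as follows; the cluster, database, user, and table names are hypothetical.

```python
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

CLUSTER = "reporting-cluster"      # hypothetical identifiers
DATABASE = "analytics"
DB_USER = "admin"

create_mv = """
CREATE MATERIALIZED VIEW finance_summary AS
SELECT region, SUM(revenue) AS total_revenue
FROM sales.transactions
GROUP BY region
"""

refresh_mv = "REFRESH MATERIALIZED VIEW finance_summary"

# Create the view once, then refresh it after each nightly ETL load.
for sql in (create_mv, refresh_mv):
    rsd.execute_statement(
        ClusterIdentifier=CLUSTER,
        Database=DATABASE,
        DbUser=DB_USER,
        Sql=sql,
    )
```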

NEW QUESTION 5
A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company’s business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics.
The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team’s goals with the least operational overhead.
Which solution meets these requirements?

  • A. Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process.
  • B. Use the AWS Glue crawler to catalog the data in Amazon S3. Use an AWS Glue job to enrich and store the result in a separate S3 bucket in Apache Parquet format.
  • C. Use Amazon Athena to query the data.
  • D. Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RDS.
  • E. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3. Use Amazon Athena to query the data.
  • F. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database.
  • G. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format.
  • H. Create an Amazon Redshift cluster and use Amazon Redshift Spectrum to query the data.
  • I. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database.
  • J. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format.
  • K. Use Amazon Athena to query the data.

Answer: B
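For reference, the catalog-then-enrich-to-Parquet pattern described in these options could look roughly like the AWS Glue job sketch below, assuming a crawler has already registered the source table; the database, table, mappings, and bucket names are placeholders.

```python
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Source table registered in the Glue Data Catalog by a crawler (hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="onprem_postgres",
    table_name="public_orders",
)

# Simple enrichment/reshaping step; extend with joins or derived columns as needed.
enriched = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "int", "order_id", "int"),
        ("order_ts", "string", "order_date", "timestamp"),
        ("amount", "double", "amount_usd", "double"),
    ],
)

# Write the result to a separate bucket in Parquet so Athena can query it cheaply.
glue_context.write_dynamic_frame.from_options(
    frame=enriched,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
```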

NEW QUESTION 6
A company stores its sales and marketing data that includes personally identifiable information (PII) in Amazon S3. The company allows its analysts to launch their own Amazon EMR cluster and run analytics reports with the data. To meet compliance requirements, the company must ensure the data is not publicly accessible throughout this process. A data engineer has secured Amazon S3 but must ensure the individual EMR clusters created by the analysts are not exposed to the public internet.
Which solution should the data engineer use to meet this compliance requirement with the LEAST amount of effort?

  • A. Create an EMR security configuration and ensure the security configuration is associated with the EMR clusters when they are created.
  • B. Check the security group of the EMR clusters regularly to ensure it does not allow inbound traffic from IPv4 0.0.0.0/0 or IPv6 ::/0.
  • C. Enable the block public access setting for Amazon EMR at the account level before any EMR cluster is created.
  • D. Use AWS WAF to block public internet access to the EMR clusters across the board.

Answer: C

Explanation:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-block-public-access.html
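A minimal boto3 sketch of enabling EMR block public access at the account level is shown below; the permitted port-22 exception mirrors the service default and can be removed if SSH access from the internet should also be blocked.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Applies at the account/Region level, before any cluster is launched.
emr.put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        # Port 22 is the default exception; omit this list entry to block it as well.
        "PermittedPublicSecurityGroupRuleRanges": [
            {"MinRange": 22, "MaxRange": 22},
        ],
    }
)
```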

NEW QUESTION 7
A company wants to enrich application logs in near-real-time and use the enriched dataset for further analysis. The application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs using Amazon CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table.
Which solution meets the requirements for the event collection and enrichment?

  • A. Use a CloudWatch Logs subscription to send the data to Amazon Kinesis Data Firehose.
  • B. Use AWS Lambda to transform the data in the Kinesis Data Firehose delivery stream and enrich it with the data in the DynamoDB table.
  • C. Configure Amazon S3 as the Kinesis Data Firehose delivery destination.
  • D. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI.
  • E. Use AWS Glue crawlers to catalog the logs.
  • F. Set up an AWS Glue connection for the DynamoDB table and set up an AWS Glue ETL job to enrich the data.
  • G. Store the enriched data in Amazon S3.
  • H. Configure the application to write the logs locally and use Amazon Kinesis Agent to send the data to Amazon Kinesis Data Streams.
  • I. Configure a Kinesis Data Analytics SQL application with the Kinesis data stream as the source.
  • J. Join the SQL application input stream with DynamoDB records, and then store the enriched output stream in Amazon S3 using Amazon Kinesis Data Firehose.
  • K. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI.
  • L. Use Apache Spark SQL on Amazon EMR to read the logs from Amazon S3 and enrich the records with the data from DynamoDB.
  • M. Store the enriched data in Amazon S3.

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
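For context, the Lambda transformation step in the Firehose-based approach above might look roughly like the sketch below. It assumes the CloudWatch Logs subscription delivers gzip-compressed, batched log events and that a hypothetical DynamoDB table keyed on the log group holds the enrichment attributes; control messages and error handling are omitted.

```python
import base64
import gzip
import json

import boto3

table = boto3.resource("dynamodb").Table("enrichment-table")  # hypothetical table name


def handler(event, context):
    output = []
    for record in event["records"]:
        # CloudWatch Logs subscription data arrives gzip-compressed and batched.
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))

        # One enrichment lookup per batch, keyed on the log group (hypothetical key schema).
        item = table.get_item(Key={"log_group": payload.get("logGroup", "unknown")}).get("Item", {})

        enriched_lines = []
        for log_event in payload.get("logEvents", []):
            enriched_lines.append(json.dumps({
                "timestamp": log_event["timestamp"],
                "message": log_event["message"],
                "enrichment": item,
            }, default=str))

        # Firehose expects exactly one output record per input recordId.
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(("\n".join(enriched_lines) + "\n").encode()).decode(),
        })
    return {"records": output}
```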

NEW QUESTION 8
A company has an encrypted Amazon Redshift cluster. The company recently enabled Amazon Redshift audit logs and needs to ensure that the audit logs are also encrypted at rest. The logs are retained for 1 year. The auditor queries the logs once a month.
What is the MOST cost-effective way to meet these requirements?

  • A. Encrypt the Amazon S3 bucket where the logs are stored by using AWS Key Management Service (AWS KMS). Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis.
  • B. Query the data as required.
  • C. Disable encryption on the Amazon Redshift cluster, configure audit logging, and encrypt the Amazon Redshift cluster.
  • D. Use Amazon Redshift Spectrum to query the data as required.
  • E. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption.
  • F. Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis.
  • G. Query the data as required.
  • H. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption.
  • I. Use Amazon Redshift Spectrum to query the data as required.

Answer: A
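As a rough sketch of the selected approach, default KMS encryption could be enabled on the log bucket and a daily COPY issued through the Redshift Data API as below; the bucket, key alias, cluster, IAM role ARN, and table names are all hypothetical.

```python
import boto3

s3 = boto3.client("s3")
rsd = boto3.client("redshift-data")

# Encrypt the audit-log bucket at rest with a KMS key (hypothetical names).
s3.put_bucket_encryption(
    Bucket="example-redshift-audit-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/audit-log-key",
            }
        }]
    },
)

# Daily load of the audit logs into the already-encrypted cluster for the monthly queries.
rsd.execute_statement(
    ClusterIdentifier="audit-cluster",
    Database="audit",
    DbUser="admin",
    Sql="""
        COPY audit_logs
        FROM 's3://example-redshift-audit-logs/AWSLogs/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        GZIP DELIMITER '|'
    """,
)
```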

NEW QUESTION 9
A company uses the Amazon Kinesis SDK to write data to Kinesis Data Streams. Compliance requirements state that the data must be encrypted at rest using a key that can be rotated. The company wants to meet this encryption requirement with minimal coding effort.
How can these requirements be met?

  • A. Create a customer master key (CMK) in AWS KMS.
  • B. Assign the CMK an alias.
  • C. Use the AWS Encryption SDK, providing it with the key alias to encrypt and decrypt the data.
  • D. Create a customer master key (CMK) in AWS KMS.
  • E. Assign the CMK an alias.
  • F. Enable server-side encryption on the Kinesis data stream using the CMK alias as the KMS master key.
  • G. Create a customer master key (CMK) in AWS KMS.
  • H. Create an AWS Lambda function to encrypt and decrypt the data.
  • I. Set the KMS key ID in the function’s environment variables.
  • J. Enable server-side encryption on the Kinesis data stream using the default KMS key for Kinesis Data Streams.

Answer: B
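A minimal boto3 sketch of the selected approach: create a customer managed key with rotation enabled, give it an alias, and turn on server-side encryption for the stream. The alias and stream name are placeholders, and no producer code changes are required.

```python
import boto3

kms = boto3.client("kms")
kinesis = boto3.client("kinesis")

# Create a customer master key, give it an alias, and enable automatic rotation.
key_id = kms.create_key(Description="Kinesis stream encryption key")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/kinesis-stream-key", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)

# Turn on server-side encryption for the stream using the alias (stream name is hypothetical).
kinesis.start_stream_encryption(
    StreamName="orders-stream",
    EncryptionType="KMS",
    KeyId="alias/kinesis-stream-key",
)
```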

NEW QUESTION 10
A marketing company is using Amazon EMR clusters for its workloads. The company manually installs third-party libraries on the clusters by logging in to the master nodes. A data analyst needs to create an automated solution to replace the manual process.
Which options can fulfill these requirements? (Choose two.)

  • A. Place the required installation scripts in Amazon S3 and execute them using custom bootstrap actions.
  • B. Place the required installation scripts in Amazon S3 and execute them through Apache Spark in Amazon EMR.
  • C. Install the required third-party libraries in the existing EMR master node.
  • D. Create an AMI out of that master node and use that custom AMI to re-create the EMR cluster.
  • E. Use an Amazon DynamoDB table to store the list of required applications.
  • F. Trigger an AWS Lambda function with DynamoDB Streams to install the software.
  • G. Launch an Amazon EC2 instance with Amazon Linux and install the required third-party libraries on the instance.
  • H. Create an AMI and use that AMI to create the EMR cluster.

Answer: AE

Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/07/amazon-emr-now-supports-launching-clusters-with-cust https://docs.aws.amazon.com/de_de/emr/latest/ManagementGuide/emr-plan-bootstrap.html
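For reference, launching a cluster with a custom bootstrap action stored in Amazon S3 might look like the boto3 sketch below; the release label, instance settings, script path, and arguments are hypothetical.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Cluster with a custom bootstrap action; script path and instance settings are hypothetical.
emr.run_job_flow(
    Name="analytics-cluster",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    BootstrapActions=[{
        "Name": "install-third-party-libs",
        "ScriptBootstrapAction": {
            # Shell script in S3 that runs on every node before applications start.
            "Path": "s3://example-bootstrap-bucket/install_libs.sh",
            "Args": ["numpy", "pandas"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```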

NEW QUESTION 11
A hospital uses wearable medical sensor devices to collect data from patients. The hospital is architecting a near-real-time solution that can ingest the data securely at scale. The solution should also be able to remove the patient’s protected health information (PHI) from the streaming data and store the data in durable storage.
Which solution meets these requirements with the least operational overhead?

  • A. Ingest the data using Amazon Kinesis Data Streams, which invokes an AWS Lambda function using Kinesis Client Library (KCL) to remove all PHI.
  • B. Write the data in Amazon S3.
  • C. Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3. Have Amazon S3 trigger an AWS Lambda function that parses the sensor data to remove all PHI in Amazon S3.
  • D. Ingest the data using Amazon Kinesis Data Streams to write the data to Amazon S3. Have the data stream launch an AWS Lambda function that parses the sensor data and removes all PHI in Amazon S3.
  • E. Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3. Implement a transformation AWS Lambda function that parses the sensor data to remove all PHI.

Answer: D

Explanation:
https://aws.amazon.com/blogs/big-data/persist-streaming-data-to-amazon-s3-using-amazon-kinesis-firehose-and
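A rough sketch of a Kinesis Data Firehose delivery stream with a transformation Lambda function (the PHI-removal step) attached is shown below; all ARNs, bucket names, and the function name are placeholders.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Delivery stream with a transformation Lambda that strips PHI; ARNs and names are placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="sensor-ingest",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
        "BucketARN": "arn:aws:s3:::example-sensor-data",
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                "Type": "Lambda",
                "Parameters": [{
                    "ParameterName": "LambdaArn",
                    "ParameterValue": "arn:aws:lambda:us-east-1:123456789012:function:remove-phi",
                }],
            }],
        },
    },
)
```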

NEW QUESTION 12
A media content company has a streaming playback application. The company wants to collect and analyze the data to provide near-real-time feedback on playback issues. The company needs to consume this data and return results within 30 seconds according to the service-level agreement (SLA). The company needs the consumer to identify playback issues, such as quality during a specified timeframe. The data will be emitted as JSON and may change schemas over time.
Which solution will allow the company to collect data for processing while meeting these requirements?

  • A. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to trigger an AWS Lambda function to process the data.
  • B. The Lambda function will consume the data and process it to identify potential playback issues.
  • C. Persist the raw data to Amazon S3.
  • D. Send the data to Amazon Managed Streaming for Kafka and configure an Amazon Kinesis Analytics for Java application as the consumer.
  • E. The application will consume the data and process it to identify potential playback issues.
  • F. Persist the raw data to Amazon DynamoDB.
  • G. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to trigger an event for AWS Lambda to process.
  • H. The Lambda function will consume the data and process it to identify potential playback issues.
  • I. Persist the raw data to Amazon DynamoDB.
  • J. Send the data to Amazon Kinesis Data Streams and configure an Amazon Kinesis Analytics for Java application as the consumer.
  • K. The application will consume the data and process it to identify potential playback issues.
  • L. Persist the raw data to Amazon S3.

Answer: D

Explanation:
https://aws.amazon.com/blogs/aws/new-amazon-kinesis-data-analytics-for-java/

NEW QUESTION 13
A company is planning to do a proof of concept for a machine learning (ML) project using Amazon SageMaker with a subset of existing on-premises data hosted in the company’s 3 TB data warehouse. For part of the project, AWS Direct Connect is established and tested. To prepare the data for ML, data analysts are performing data curation. The data analysts want to perform multiple steps, including mapping, dropping null fields, resolving choice, and splitting fields. The company needs the fastest solution to curate the data for this project.
Which solution meets these requirements?

  • A. Ingest data into Amazon S3 using AWS DataSync and use Apache Spark scripts to curate the data in an Amazon EMR cluster.
  • B. Store the curated data in Amazon S3 for ML processing.
  • C. Create custom ETL jobs on-premises to curate the data.
  • D. Use AWS DMS to ingest data into Amazon S3 for ML processing.
  • E. Ingest data into Amazon S3 using AWS DMS.
  • F. Use AWS Glue to perform data curation and store the data in Amazon S3 for ML processing.
  • G. Take a full backup of the data store and ship the backup files using AWS Snowball.
  • H. Upload Snowball data into Amazon S3 and schedule data curation jobs using AWS Batch to prepare the data for ML.

Answer: C
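For context, the curation steps named in the question (mapping, dropping null fields, resolving choice) map directly onto AWS Glue transforms, roughly as in the sketch below; the catalog database, table, mappings, and output path are hypothetical.

```python
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping, DropNullFields, ResolveChoice
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Raw data landed in S3 by AWS DMS and catalogued by a crawler (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="dw_staging",
    table_name="customer_orders",
)

# Mapping step: rename and cast fields (field splitting can be added with derived columns).
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("cust_id", "string", "customer_id", "long"),
        ("order_total", "string", "order_total", "double"),
        ("order_dt", "string", "order_date", "timestamp"),
    ],
)

# Resolve ambiguous column types, then drop null fields.
resolved = ResolveChoice.apply(frame=mapped, choice="make_struct")
curated = DropNullFields.apply(frame=resolved)

# Write curated data back to S3 for SageMaker to consume.
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-ml-curated/orders/"},
    format="parquet",
)
```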

NEW QUESTION 14
A company uses Amazon Redshift for its data warehousing needs. ETL jobs run every night to load data, apply business rules, and create aggregate tables for reporting. The company's data analysis, data science, and business intelligence teams use the data warehouse during regular business hours. The workload management is set to auto, and separate queues exist for each team with the priority set to NORMAL.
Recently, a sudden spike of read queries from the data analysis team has occurred at least twice daily, and queries wait in line for cluster resources. The company needs a solution that enables the data analysis team to avoid query queuing without impacting latency and the query times of other teams.
Which solution meets these requirements?

  • A. Increase the query priority to HIGHEST for the data analysis queue.
  • B. Configure the data analysis queue to enable concurrency scaling.
  • C. Create a query monitoring rule to add more cluster capacity for the data analysis queue when queries are waiting for resources.
  • D. Use workload management query queue hopping to route the query to the next matching queue.

Answer: D

NEW QUESTION 15
A company wants to improve user satisfaction for its smart home system by adding more features to its recommendation engine. Each sensor asynchronously pushes its nested JSON data into Amazon Kinesis Data Streams using the Kinesis Producer Library (KPL) in Java. Statistics from a set of failed sensors showed that, when a sensor is malfunctioning, its recorded data is not always sent to the cloud.
The company needs a solution that offers near-real-time analytics on the data from the most updated sensors. Which solution enables the company to meet these requirements?

  • A. Set the RecordMaxBufferedTime property of the KPL to "1" to disable the buffering on the sensor side. Use Kinesis Data Analytics to enrich the data based on a company-developed anomaly detection SQL script.
  • B. Push the enriched data to a fleet of Kinesis data streams and enable the data transformation feature to flatten the JSON file.
  • C. Instantiate a dense storage Amazon Redshift cluster and use it as the destination for the Kinesis Data Firehose delivery stream.
  • D. Update the sensors' code to use the PutRecord/PutRecords call from the Kinesis Data Streams API with the AWS SDK for Java.
  • E. Use Kinesis Data Analytics to enrich the data based on a company-developed anomaly detection SQL script.
  • F. Direct the output of the KDA application to a Kinesis Data Firehose delivery stream, enable the data transformation feature to flatten the JSON file, and set the Kinesis Data Firehose destination to an Amazon Elasticsearch Service cluster.
  • G. Set the RecordMaxBufferedTime property of the KPL to "0" to disable the buffering on the sensor side. Connect a dedicated Kinesis Data Firehose delivery stream to each stream and enable the data transformation feature to flatten the JSON file before sending it to an Amazon S3 bucket.
  • H. Load the S3 data into an Amazon Redshift cluster.
  • I. Update the sensors' code to use the PutRecord/PutRecords call from the Kinesis Data Streams API with the AWS SDK for Java.
  • J. Use AWS Glue to fetch and process data from the stream using the Kinesis Client Library (KCL). Instantiate an Amazon Elasticsearch Service cluster and use AWS Lambda to directly push data into it.

Answer: B

Explanation:
https://docs.aws.amazon.com/streams/latest/dev/developing-producers-with-kpl.html
The KPL can incur an additional processing delay of up to RecordMaxBufferedTime within the library (user-configurable). Larger values of RecordMaxBufferedTime result in higher packing efficiencies and better performance. Applications that cannot tolerate this additional delay may need to use the AWS SDK directly.

NEW QUESTION 16
A global company has different sub-organizations, and each sub-organization sells its products and services in various countries. The company's senior leadership wants to quickly identify which sub-organization is the strongest performer in each country. All sales data is stored in Amazon S3 in Parquet format.
Which approach can provide the visuals that senior leadership requested with the least amount of effort?

  • A. Use Amazon QuickSight with Amazon Athena as the data source.
  • B. Use heat maps as the visual type.
  • C. Use Amazon QuickSight with Amazon S3 as the data source.
  • D. Use heat maps as the visual type.
  • E. Use Amazon QuickSight with Amazon Athena as the data source.
  • F. Use pivot tables as the visual type.
  • G. Use Amazon QuickSight with Amazon S3 as the data source.
  • H. Use pivot tables as the visual type.

Answer: A

NEW QUESTION 17
An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code.
A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application.
How should the data analyst meet this requirement while minimizing costs?

  • A. Store the contents of the mapping file in an Amazon DynamoDB table.
  • B. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists.
  • C. Change the SQL query in the application to include the new field in the SELECT statement.
  • D. Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application.
  • E. Change the SQL query in the application to include a join to the file’s S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.
  • F. Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application.
  • G. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.
  • H. Store the contents of the mapping file in an Amazon DynamoDB table.
  • I. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists.
  • J. Forward the record from the Lambda function to the original application destination.

Answer: C
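As an illustration of the reference-data approach, the .csv mapping file could be registered with the SQL application roughly as follows using boto3; the application name, bucket, file key, role ARN, and column names are placeholders.

```python
import boto3

kda = boto3.client("kinesisanalytics", region_name="us-east-1")

app = kda.describe_application(ApplicationName="scoring-app")["ApplicationDetail"]

# Register the S3 .csv mapping file as an in-application reference table (names are placeholders).
kda.add_application_reference_data_source(
    ApplicationName="scoring-app",
    CurrentApplicationVersionId=app["ApplicationVersionId"],
    ReferenceDataSource={
        "TableName": "ZIP_TO_TERRITORY",
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::example-reference-data",
            "FileKey": "zip_to_territory.csv",
            "ReferenceRoleARN": "arn:aws:iam::123456789012:role/KdaReferenceRole",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {
                    "CSVMappingParameters": {
                        "RecordRowDelimiter": "\n",
                        "RecordColumnDelimiter": ",",
                    }
                },
            },
            "RecordColumns": [
                {"Name": "us_5_digit_zip_code", "SqlType": "VARCHAR(5)"},
                {"Name": "territory_code", "SqlType": "VARCHAR(16)"},
            ],
        },
    },
)
```

The application's SQL would then join the in-application stream to ZIP_TO_TERRITORY and add territory_code to the SELECT list.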

NEW QUESTION 18
A transportation company uses IoT sensors attached to trucks to collect vehicle data for its global delivery fleet. The company currently sends the sensor data in small .csv files to Amazon S3. The files are then loaded into a 10-node Amazon Redshift cluster with two slices per node and queried using both Amazon Athena and Amazon Redshift. The company wants to optimize the files to reduce the cost of querying and also improve the speed of data loading into the Amazon Redshift cluster.
Which solution meets these requirements?

  • A. Use AWS Glue to convert all the files from .csv to a single large Apache Parquet file.
  • B. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.
  • C. Use Amazon EMR to convert each .csv file to Apache Avro.
  • D. COPY the files into Amazon Redshift and query the file with Athena from Amazon S3.
  • E. Use AWS Glue to convert the files from .csv to a single large Apache ORC file.
  • F. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.
  • G. Use AWS Glue to convert the files from .csv to Apache Parquet to create 20 Parquet files.
  • H. COPY the files into Amazon Redshift and query the files with Athena from Amazon S3.

Answer: D
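As a rough sketch, an AWS Glue job could rewrite the small .csv files as exactly 20 Parquet files, one per Redshift slice (10 nodes with 2 slices each), so the COPY command loads them in parallel; the S3 paths are hypothetical.

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read the small .csv sensor files and rewrite them as 20 Parquet files,
# one per Redshift slice (10 nodes x 2 slices), so COPY loads in parallel
# and Athena scans only the columns it needs.
df = spark.read.option("header", "true").csv("s3://example-iot-raw/csv/")
df.repartition(20).write.mode("overwrite").parquet("s3://example-iot-optimized/parquet/")
```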

NEW QUESTION 19
A company has a data warehouse in Amazon Redshift that is approximately 500 TB in size. New data is imported every few hours and read-only queries are run throughout the day and evening. There is a particularly heavy load with no writes for several hours each morning on business days. During those hours, some queries are queued and take a long time to execute. The company needs to optimize query execution and avoid any downtime.
What is the MOST cost-effective solution?

  • A. Enable concurrency scaling in the workload management (WLM) queue.
  • B. Add more nodes using the AWS Management Console during peak hours.
  • C. Set the distribution style to ALL.
  • D. Use elastic resize to quickly add nodes during peak times.
  • E. Remove the nodes when they are not needed.
  • F. Use a snapshot, restore, and resize operation.
  • G. Switch to the new target cluster.

Answer: A

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html
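For reference, concurrency scaling is switched on per WLM queue through the cluster parameter group. The sketch below shows one possible wlm_json_configuration update with boto3; the parameter group name and queue layout are hypothetical, and a real change must include every queue the cluster needs because this value replaces the whole WLM configuration.

```python
import json

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Hypothetical single-queue WLM layout with concurrency scaling enabled.
wlm = [
    {
        "query_group": [],
        "user_group": [],
        "query_concurrency": 5,
        "concurrency_scaling": "auto",
    }
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="reporting-wlm",
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm),
        "ApplyType": "dynamic",
    }],
)
```

With concurrency scaling, Amazon Redshift adds transient cluster capacity only while queries are queued, so the morning read peak is absorbed without resizing or downtime.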

NEW QUESTION 20
......

Thanks for reading the newest DAS-C01 exam dumps! We recommend you try the PREMIUM Dumpscollection.com DAS-C01 dumps in VCE and PDF here: https://www.dumpscollection.net/dumps/DAS-C01/ (130 Q&As Dumps)