aiotestking uk

BDS-C00 Exam Questions - Online Test


BDS-C00 Premium VCE File

Learn More 100% Pass Guarantee - Dumps Verified - Instant Download
150 Lectures, 20 Hours

Our pass rate is high to 98.9% and the similarity percentage between our BDS-C00 study guide and real exam is 90% based on our seven-year educating experience. Do you want achievements in the Amazon-Web-Services BDS-C00 exam in just one try? I am currently studying for the Amazon-Web-Services BDS-C00 exam. Latest Amazon-Web-Services BDS-C00 Test exam practice questions and answers, Try Amazon-Web-Services BDS-C00 Brain Dumps First.

Online BDS-C00 free questions and answers of New Version:

NEW QUESTION 1
A solutions architect for a logistics organization ships packages from thousands of suppliers to end
customers. The architect is building a platform where suppliers can view the status of one or more of their shipments. Each supplier can have multiple roles that will only allow access to specific fields in the resulting information.
Which strategy allows the appropriate level of access control and requires the LEAST amount of
management work?

  • A. Send the tracking data to Amazon Kinesis Stream
  • B. Use AWS Lambda to store the data in an Amazon DynamoDB Tabl
  • C. Generate temporary AWS credentials for the supplier’s users with AWS STS, specifying fine-grained security policies to limit access only to their application data.
  • D. Send the tracking data to Amazon Kinesis Firehous
  • E. Use Amazon S3 notifications and AWS Lambda to prepare files in Amazon S3 with appropriate data for each supplier’s role
  • F. Generate temporary AWS credentials for the suppliers’ users with AWS ST
  • G. Limit access to the appropriate files through security policies.
  • H. Send the tracking data to Amazon Kinesis Stream
  • I. Use Amazon EMR with Spark Streaming to store the data in HBas
  • J. Create one table per supplie
  • K. Use HBase Kerberos integration with the suppliers’ user
  • L. Use HBase ACL-based security to limit access to the roles to their specific table and columns.
  • M. Send the tracking data to Amazon Kinesis Firehos
  • N. Store the data in an Amazon Redshift cluste
  • O. Create views for the supplier’s users and role
  • P. Allow suppliers access to the Amazon Redshift cluster using a user limited to the application view.

Answer: B

NEW QUESTION 2
Your application uses CloudFormation to orchestrate your application’s resources. During your testing phase before application went live, your Amazon RDS instance type was changed and caused the instance to be re-created, resulting in the loss of test data.
How should you prevent this from occurring in the future?

  • A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type
  • B. Use an AWS CloudFormation stack policy to deny updates to the instanc
  • C. Only allow UpdateStack permission to IAM principles that are denied SetStackPolicy
  • D. In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DBInstanceClass property to be read-only
  • E. Subscribe to the AWS CloudFormation notification “BeforeResourceUpdate” and call CancelStackUpdate if the resource identified is the Amazon RDS instance
  • F. In the AWS ClousFormation template, set the DeletionPolicy of the AWS::RDS::DBInstance’s DeletionPolicy property to “Retain”

Answer: E

NEW QUESTION 3
A company is centralizing a large number of unencrypted small files rom multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.
Which method meets the requirements?

  • A. Company the S3 Etags from the source and destination objects
  • B. Call the S3 CompareObjects API for the source and destination objects
  • C. Place a HEAD request against the source and destination objects comparing SIG v4 header
  • D. Compare the size of the source and destination objects

Answer: B

NEW QUESTION 4
A media advertising company handles a large number of real-time messages sourced from over 200
websites. The company’s data engineer needs to collect and process records in real time for analysis using Spark Streaming on Amazon Elastic MapReduce (EMR). The data engineer needs to fulfill a corporate mandate to keep ALL raw messages as they are received as a top priority.
Which Amazon Kinesis configuration meets these requirements?

  • A. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Pull messages off Firehose with Spark Streaming in parallel to persistence to Amazon S3
  • B. Publish messages to Amazon Kinesis Stream
  • C. Pull messages off Stream with Spark Streaming in parallel to AWS messages from Streams to Firehose backed by Amazon Simple Storage Service (S3)
  • D. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage (S3). Use AWS Lambda messages from Firehose to Streams for processing with Spark Streaming
  • E. Publish messages to Amazon Kinesis Streams, pull messages off with Spark Streaming and write data new data to Amazon Simple Storage Service (S3) before and after processing

Answer: D

NEW QUESTION 5
You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that
instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated.
What do you need to do to ensure trial instances marked unhealthy by the ELB will be terminated
and replaced?

  • A. Change the thresholds set on the Auto Scaling group health check
  • B. Add an Elastic Load Balancing health check to your Auto Scaling group
  • C. Increase the value for the Health check interval set on the Elastic Load Balancer
  • D. Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks

Answer: B

NEW QUESTION 6
When will you incur costs with an Elastic IP address (EIP)?

  • A. When an EIP is allocated
  • B. When it is allocated and associated with a running instance
  • C. When it is allocated and associated with a stopped instance
  • D. Costs are incurred regardless of whether the EIP associated with a running instance

Answer: C

NEW QUESTION 7
The majority of your Infrastructure is on premises and you have a small footprint on AWS Your
company has decided to roll out a new application that is heavily dependent on low latency connectivity to LOAP for authentication Your security policy requires minimal changes to the company's existing application user management processes.
What option would you implement to successfully launch this application1?

  • A. Create a second, independent LOAP server in AWS for your application to use for authentication
  • B. Establish a VPN connection so your applications can authenticate against your existing on- premises LDAP servers
  • C. Establish a VPN connection between your data center and AWS create a LDAP replica on AWS and configure your application to use the LDAP replica for authentication
  • D. Create a second LDAP domain on AWS establish a VPN connection to establish a trust relationship between your new and existing domains and use the new domain for authentication

Answer: C

NEW QUESTION 8
An administrator tries to use the Amazon Machine Learning service to classify social media posts that mention the administrator’s company into posts that requires a response and posts that do not. The training dataset of 10,000 posts contains the details of each post including the timestamp, author, and full text of the post. The administrator is missing the target labels that are required for training. Which Amazon Machine Learning model is the most appropriate for the task?

  • A. Unary classification model, where the target class is the require-response post
  • B. Binary classification model, where the two classes are require-response and does-not-require- response
  • C. Multi-class prediction model, with two classes require-response and does-not-require response
  • D. Regression model where the predicted value is the probability that the post requires a response

Answer: B

NEW QUESTION 9
An Amazon Redshift Database is encrypted using KMS. A data engineer needs to use the AWS CLI to
create a KMS encrypted snapshot of the database in another AWS region.
Which three steps should the data engineer take to accomplish this task? (Select Three.)

  • A. Create a new KMS key in the destination region
  • B. Copy the existing KMS key to the destination region
  • C. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key created in the destination region
  • D. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the source region
  • E. In the source, enable cross-region replication and specify the name of the copy grant created
  • F. In the destination region, enable cross-region replication and specify the name of the copy grant created

Answer: BDF

NEW QUESTION 10
Managers in a company need access to the human resources database that runs on Amazon Redshift, to run reports about their employees. Managers must only see information about their direct reports.
Which technique should be used to address this requirement with Amazon Redshift?

  • A. Define an IAM group for each employee as an IAM user in that group and use that to limit theaccess.
  • B. Use Amazon Redshift snapshot to create one cluster per manage
  • C. Allow the managers to access only their designated clusters.
  • D. Define a key for each manager in AWS KMS and encrypt the data for their employees with their private keys.
  • E. Define a view that uses the employee’s manager name to filter the records based on current user names.

Answer: B

NEW QUESTION 11
A company generates a large number of files each month and needs to use AWS import/export to
move these files into Amazon S3 storage. To satisfy the auditors, the company needs to keep a record of which files were imported into Amazon S3.
What is a low-cost way to create a unique log for each import job?

  • A. Use the same log file prefix in the import/export manifest files to create a versioned log file in Amazon S3 for all imports
  • B. Use the log file prefix in the import/export manifest file to create a unique log file in Amazon S3 for each import
  • C. Use the log file checksum in the import/export manifest file to create a log file in Amazon S3 for each import
  • D. Use script to iterate over files in Amazon S3 to generate a log after each import/export job

Answer: B

NEW QUESTION 12
You currently run your infrastructure on Amazon EC2 instances behind on Auto Scaling group. All logs
for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug.
Which technique should you use to make sure you are able to review your logs after your instances have shut down?

  • A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate
  • B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate
  • C. Install the CloudWatch logs Agent on your AMI, and configure CloudWatch Logs Agent to stream your logs
  • D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive
  • E. Install the CloudWatch Logs Agent on your AM
  • F. Update your Scaling policy to enable automated CloudWatch Log copy

Answer: C

NEW QUESTION 13
You have been asked to use your department’s existing continuous integration (CI) tool to test a
three- tier web architecture defined in an AWS CloudFormation template. The tool already supports AWS APIs and can launch new AWS CloudFormation stacks after polling version control. The CI tool reports on the success of the AWS CloudFormation stack creation by using the DescribeStacks API to look for the CREATE_COMPLETE status.
The architecture tiers defined in the template consist of:
. One load balancer
. Five Amazon EC2 instances running the web application
. One multi-AZ Amazon RDS instance How would you implement this? Choose 2 answers

  • A. Define a WaitCondition and a WaitConditionhandle for the output of a output of a UserData command that does sanity checking of the application’s post-install state
  • B. Define a CustomResource and write a script that runs architecture-level integration tests through the load balancer to the application and database for the state of multiple tiers
  • C. Define a WaitCondition and use a WaitConditionHandle that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
  • D. Define a CustomResource that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
  • E. Define a UserDataHandle for the output of a UserData command that does sanity checking of the application’s post-install state and runs integration tests on the state of multiple tiers through load balancer to the application
  • F. Define a UserDataHandle for the output of a CustomResource that does sanity checking of the application’s post-install state

Answer: AF

NEW QUESTION 14
You have been tasked with implementing an automated data backup solution for your application
servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour.
How can you implement this through a script that a scheduling deamon runs daily on the application servers?

  • A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current data time group, and copy backup data to a second Amazon EBS volum
  • B. Use the ec2-describe- volumes API to enumerate existing backup volume
  • C. Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days
  • D. Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time grou
  • E. Use the list vaults API to enumerate existing backup archive
  • F. Call the delete vault API to prune backup archives that are tagged with a date-time group older than30 days
  • G. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time grou
  • H. Use the ec2-describe-snapshot API to enumerate existing Amazon EBS snapshot
  • I. Call the ec2-delete-snapshot API to prune Amazon EBs snapshots that are tagged with a date-time group older than 30 days
  • J. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to backup data to the new Amazon EBS volum
  • K. Use the ec2-describe-snapshot API to enumerate existing backup volume
  • L. Call the ec2- delete- snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days

Answer: C

NEW QUESTION 15
A solutions architect works for a company that has a data lake based on a central Amazon S3 bucket.
The data contains sensitive information. The architect must be able to specify exactly which files each user can access. Users access the platform through SAML federation Single Sign On platform.
The architect needs to build a solution that allows fine grained access control, traceability of access to the objects, and usage of the standard tools (AWS Console, AWS CLI) to access the data.
Which solution should the architect build?

  • A. Use Amazon S3 Server-Side Encryption with AWS KMS-Managed Keys for strong data.Use AWS KMS to allow access to specific elements of the platfor
  • B. Use AWS CloudTrail for auditing
  • C. Use Amazon S3 Server-Side Encryption with Amazon S3 Managed Key
  • D. Set Amazon S3ACI to allow access to specific elements of the platfor
  • E. Use Amazon S3 to access logs for auditing
  • F. Use Amazon S3 Client-Side Encryption with Client-Side Master Ke
  • G. Set Amazon S3 ACI to allow access to specific elements of the platfor
  • H. Use Amazon S3 access logs for auditing
  • I. Use Amazon S3 Client-Side Encryption with AWS KMS-Managed keys for storing data.Use AMS KWS to allow access to specific elements of the platfor
  • J. Use AWS CloudTrail for auditing

Answer: B

NEW QUESTION 16
A social media customer has data from different data sources including RDS running MySQL, RedShift, and Hive on EMR. To support better analysis, the customer needs to be able to analyze data from different data sources and to combine the results.
What is the most cost-effective solution to meet these requirements?

  • A. Load all data from a different database/warehouse to S3. Use Redshift COPY command to copy data to Redshift for analysis.
  • B. Install Presto on the EMR cluster where Hive sit
  • C. Configure MySQL and PostgreSQL connector to select from different data sources in a single query.
  • D. Spin up an Elasticsearch cluste
  • E. Load data from all three data sources and use Kibana to analyze.
  • F. Write a program running on a separate EC2 instance to run queries to three different system
  • G. Aggregate the results after getting the responses from all three systems.

Answer: D

NEW QUESTION 17
When you put objects in Amazon 53, what is the indication that an object was successfully stored?

  • A. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful
  • B. A success code is inserted into the S3 object metadata
  • C. Amazon S3 is engineered for 99.999999999% durabilit
  • D. Therefore there is no need to confirm that data was inserted.
  • E. Each S3 account has a special bucket named_ s3_log
  • F. Success codes are written to this bucket with a timestamp and checksum

Answer: A

NEW QUESTION 18
A company hosts a portfolio of e-commerce websites across the Oregon, N.Virginia, Ireland and
Sydney AWS regions. Each site keeps log files that captures user behavior. The company has built an application that generates batches of product recommendations with collaborative filtering in Oregon. Oregon was selected because the flagship site is hosted there and provides the largest collection of data to train machine learning models against. The other regions do NOT have enough historic data to train accurate machine learning models.
Which set of data processing steps improves recommendations for each region?

  • A. Use the e-commerce application in Oregon to write replica log files in each other region
  • B. Use Amazon S3 bucket replication to consolidate log entries and builds a single model in Oregon
  • C. Use Kinesis as a butler for web logs and replicate logs to the Kinesis streams of a neighboring region
  • D. Use the CloudWatch Logs agent to consolidate logs into a single CloudWatch logs group

Answer: C

NEW QUESTION 19
A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is
disabled. The user wants to now enable detailed monitoring. How can the user achieve this?

  • A. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
  • B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
  • C. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
  • D. Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group

Answer: D

NEW QUESTION 20
You need to design a VPC for a web-application consisting of an Elastic Load Balancer (ELB). A fleet of web/application servers, and an RDS database The Entire Infrastructure must be distributed over 2 availability zones.
Which VPC configuration works while assuring the database is not available from the Internet?

  • A. One public subnet for ELB one public subnet for the web-servers, and one private subnet for the database
  • B. One public subnet for ELB two private subnets for the web-servers, two private subnets for RDS
  • C. Two public subnets for ELB two private subnets for the web-servers and two private subnets for RDS
  • D. Two public subnets for ELB two public subnets for the web-servers, and two public subnets for RDS

Answer: C

NEW QUESTION 21
A user has launched an EC2 instance from an instance store backed AMI. The user has attached an
additional instance store volume to the instance. The user wants to create an AMI from the running instance. Will the AMI have the additional instance store volume data?

  • A. Yes, the block device mapping will have information about the additional instance store volume
  • B. No, since the instance store backed AMI can have only the root volume bundled
  • C. It is not possible to attach an additional instance store volume to the existing instance store backed AMI instance
  • D. No, since this is ephermal storage it will not be a part of the AMI

Answer: A

NEW QUESTION 22
......

P.S. Easily pass BDS-C00 Exam with 264 Q&As Simply pass Dumps & pdf Version, Welcome to Download the Newest Simply pass BDS-C00 Dumps: https://www.simply-pass.com/Amazon-Web-Services-exam/BDS-C00-dumps.html (264 New Questions)