
DBS-C01 Exam Questions - Online Test



We provide real DBS-C01 exam questions and answers in two formats: a downloadable PDF and practice tests. Pass the Amazon Web Services DBS-C01 exam quickly and easily. The PDF version can be read and printed, so you can practice as many times as you like. With our Amazon Web Services DBS-C01 PDF and VCE materials, you can pass the DBS-C01 exam with confidence.

Free demo questions for the Amazon Web Services DBS-C01 exam are below:

NEW QUESTION 1
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment, with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist can log in to MySQL and run queries from the bastion host using these details.
When users try to use the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection timed out” error message to Amazon CloudWatch Logs.
What is the cause of this error?

  • A. The user name and password the application is using are incorrect.
  • B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
  • C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
  • D. The user name and password are correct, but the user is not authorized to use the DB instance.

Answer: C
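
For reference, a minimal boto3 sketch of the fix implied by answer C; the security group IDs below (sg-0db11111 for the DB instance, sg-0app22222 for the application servers) are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow inbound MySQL traffic (TCP 3306) to the DB instance's security group,
# but only from the application servers' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111",  # hypothetical: attached to the RDS DB instance
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0app22222"}],  # hypothetical: app servers' SG
        }
    ],
)
```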

NEW QUESTION 2
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error and what should the Database Specialist do to resolve this issue?

  • A. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
  • B. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
  • C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
  • D. The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Answer: C
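
As an illustration of answer C, a minimal boto3 sketch that scales the instance up so more local storage is available for temporary tables; the instance identifier and target class are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move the Aurora instance to a larger class; larger instance classes come with
# more local (temporary) storage, which is what the index build ran out of.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-poc-instance-1",  # hypothetical identifier
    DBInstanceClass="db.r5.xlarge",                # hypothetical target class
    ApplyImmediately=True,
)
```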

NEW QUESTION 3
A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?

  • A. Dump all the tables from the Oracle database into an Amazon S3 bucket using Data Pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
  • B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
  • C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
  • D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Answer: C
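
A minimal boto3 sketch of the AWS DMS part of answer C (full load plus CDC); all ARNs and identifiers are placeholders, and the source/target endpoints and replication instance are assumed to exist already.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# One task migrates the existing data, then keeps applying changes (CDC)
# until cutover, which is what makes near-zero downtime possible.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-mysql",                   # hypothetical
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",  # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",  # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI",   # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```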

NEW QUESTION 4
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company’s disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.
What should a Database Specialist do to copy the database backup into a different Region?

  • A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
  • B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
  • C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
  • D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica

Answer: D
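
A minimal boto3 sketch of the cross-Region read replica from answer D; the replica name, source ARN, and Regions are hypothetical.

```python
import boto3

# Create the replica in the DR Region; automated backups of the replica can then
# be taken in that Region.
rds_dr = boto3.client("rds", region_name="us-west-2")  # hypothetical DR Region

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-west",  # hypothetical replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:mydb",  # placeholder source ARN
    SourceRegion="us-east-1",  # lets boto3 presign the cross-Region request
)
```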

NEW QUESTION 5
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?

  • A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
  • B. Create an AWS CloudFormation template and deploy the template to all the Regions.
  • C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
  • D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

Answer: C
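
A minimal boto3 sketch of the stack set approach in answer C; the template file, stack set name, account ID, and Regions are hypothetical.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical template that defines the DynamoDB leaderboard table.
with open("leaderboard-table.yaml") as f:
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="game-leaderboard",
    TemplateBody=template_body,
    PermissionModel="SELF_MANAGED",
)

# Deploy identical stacks to every Region; rerunning with more Regions (or an
# updated template) rolls out configuration changes everywhere at once.
cfn.create_stack_instances(
    StackSetName="game-leaderboard",
    Accounts=["111122223333"],  # hypothetical account ID
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)
```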

NEW QUESTION 6
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

  • A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
  • B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
  • C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
  • D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Answer: D
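
A minimal boto3 sketch of Aurora replica auto scaling (answer D); the cluster identifier, capacity limits, and CPU target are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the cluster's reader count as a scalable target (1-5 replicas here).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",  # hypothetical cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Track average reader CPU; readers are added during the reporting window and
# removed afterward, with no user involvement.
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```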

NEW QUESTION 7
A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with fewer than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, the Lambda functions are unable to connect to the DB cluster and receive “too many connections” errors.
Which of the following will resolve this issue?

  • A. Edit the my.cnf file for the DB cluster to increase max_connections
  • B. Increase the instance size of the DB cluster
  • C. Change the DB cluster to Multi-AZ
  • D. Increase the number of Aurora Replicas

Answer: B

NEW QUESTION 8
A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.
Which solution will meet these requirements at the lowest cost?

  • A. DynamoDB Streams
  • B. DynamoDB with DynamoDB Accelerator
  • C. DynamoDB with on-demand capacity mode
  • D. DynamoDB with provisioned capacity mode with Auto Scaling

Answer: C
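
A minimal boto3 sketch of switching an existing table to on-demand capacity (answer C); the table name is hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand (PAY_PER_REQUEST) absorbs the holiday ramp-up without capacity planning.
dynamodb.update_table(
    TableName="Leaderboard",        # hypothetical table name
    BillingMode="PAY_PER_REQUEST",
)
```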

NEW QUESTION 9
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.
Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

  • A. Stop the DB cluster and analyze how the website responds
  • B. Use Aurora fault injection to crash the master DB instance
  • C. Remove the DB cluster endpoint to simulate a master DB instance failure
  • D. Use Aurora Backtrack to crash the DB cluster

Answer: B
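
A minimal sketch of Aurora fault injection (answer B), assuming an Aurora MySQL-compatible cluster; the endpoint, credentials, and the use of pymysql are hypothetical details for illustration.

```python
import pymysql

conn = pymysql.connect(
    host="my-aurora.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="REPLACE_ME",
    database="mysql",
)

try:
    with conn.cursor() as cur:
        # Aurora MySQL fault injection query: simulate a crash of the writer
        # instance so the application's failover handling can be observed.
        cur.execute("ALTER SYSTEM CRASH INSTANCE;")
except pymysql.err.OperationalError:
    # The connection drops when the instance crashes; that is the expected outcome.
    pass
finally:
    conn.close()
```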

NEW QUESTION 10
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

  • A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
  • B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
  • C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
  • D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.

Answer: C
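
A minimal boto3 sketch of the scheduling half of answer C (the Lambda function itself is assumed to exist); the rule name, cron expression, and function ARN are hypothetical.

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Run the maintenance Lambda every night at 02:00 UTC (hypothetical schedule).
events.put_rule(
    Name="nightly-data-purge",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)

# The Lambda function's resource policy must also allow events.amazonaws.com to invoke it.
events.put_targets(
    Rule="nightly-data-purge",
    Targets=[{
        "Id": "purge-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:data-purge",  # placeholder ARN
    }],
)
```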

NEW QUESTION 11
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.
What should the Database Specialist do to automatically collect the database logs for the Administrator?

  • A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
  • B. Enable DocumentDB to export the logs to AWS CloudTrail
  • C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
  • D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3

Answer: A
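
A minimal boto3 sketch of answer A, exporting the DocumentDB audit log stream to CloudWatch Logs; the cluster identifier is hypothetical (audit_logs is already enabled in the cluster parameter group, per the question).

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Export the audit log type to CloudWatch Logs so DDL statements are collected
# automatically and visible to the Administrator.
docdb.modify_db_cluster(
    DBClusterIdentifier="marketing-docdb-cluster",  # hypothetical cluster name
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    ApplyImmediately=True,
)
```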

NEW QUESTION 12
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

  • A. Deploy multiple read replicas and have the team members make changes to separate replica instances
  • B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
  • C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
  • D. Enable the Amazon RDS for MySQL Backtrack feature

Answer: C
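
A minimal boto3 sketch of Aurora Backtrack (answer C); the cluster name and rewind window are hypothetical, and Backtrack must have been enabled when the cluster was created or modified.

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Rewind the Aurora MySQL cluster 30 minutes, to just before the bad schema change,
# without waiting hours for a restore.
rds.backtrack_db_cluster(
    DBClusterIdentifier="dev-aurora-mysql",  # hypothetical cluster name
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=30),
)
```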

NEW QUESTION 13
A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

  • A. Set the TCP keepalive parameters low
  • B. Call the AWS CLI failover-db-cluster command
  • C. Enable Enhanced Monitoring on the DB cluster
  • D. Start a database activity stream on the DB cluster

Answer: A
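
A minimal illustration of answer A using psycopg2/libpq connection parameters; the endpoint, database name, and credentials are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-pg.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    dbname="app",
    user="app_user",
    password="REPLACE_ME",
    # Aggressive TCP keepalives: dead connections to the old primary are detected
    # within a few seconds after a failover instead of several minutes.
    keepalives=1,
    keepalives_idle=1,
    keepalives_interval=1,
    keepalives_count=5,
    connect_timeout=3,
)
```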

NEW QUESTION 14
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

  • A. The restored DB instance does not have Enhanced Monitoring enabled
  • B. The production DB instance is using a custom parameter group
  • C. The restored DB instance is using the default security group
  • D. The production DB instance is using a custom option group

Answer: C
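
A minimal boto3 sketch of the fix for answer C, attaching a working security group to the restored instance; the identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A snapshot restore comes up with the default VPC security group unless one is
# specified, so attach the security group that allows the Development team.
rds.modify_db_instance(
    DBInstanceIdentifier="restored-dev-db",   # hypothetical restored instance
    VpcSecurityGroupIds=["sg-0dev33333"],     # hypothetical SG with the required inbound rules
    ApplyImmediately=True,
)
```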

NEW QUESTION 15
A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.
Which combination of actions should the Database Specialist take? (Choose three.)

  • A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
  • B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
  • C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
  • D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
  • E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
  • F. Configure the AWS Managed Microsoft AD domain controller Security Group.

Answer: CDF
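
A partial boto3 sketch of the directory pieces in the answer; the directory name, password, VPC/subnet IDs, instance identifier, and IAM role name are hypothetical, and the trust relationship with the corporate AD and the domain controller security group still have to be configured separately.

```python
import boto3

ds = boto3.client("ds", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Create an AWS Managed Microsoft AD (a trust to the corporate AD is then added,
# e.g. with ds.create_trust).
directory = ds.create_microsoft_ad(
    Name="corp.example.com",              # hypothetical directory DNS name
    Password="REPLACE_ME",                # admin password placeholder
    VpcSettings={
        "VpcId": "vpc-0abc12345",         # hypothetical VPC
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
    },
    Edition="Standard",
)

# Join the RDS SQL Server instance to the directory for Windows authentication.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",                 # hypothetical instance
    Domain=directory["DirectoryId"],
    DomainIAMRoleName="rds-directoryservice-access-role",  # hypothetical IAM role
    ApplyImmediately=True,
)
```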

NEW QUESTION 16
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
  • Update scores in real time whenever a player is playing the game.
  • Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?

  • A. Create a global secondary index with game_id as the partition key
  • B. Create a global secondary index with user_id as the partition key
  • C. Create a composite primary key with game_id as the partition key and user_id as the sort key
  • D. Create a composite primary key with user_id as the partition key and game_id as the sort key

Answer: D
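
A minimal boto3 sketch of the composite key in answer D; the table name and billing mode are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# user_id as the partition key and game_id as the sort key supports both access
# patterns: update a player's score during a game, and fetch a player's score
# for a specific game session.
dynamodb.create_table(
    TableName="PlayerGameScores",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```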

NEW QUESTION 17
A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user’s browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?

  • A. Amazon DocumentDB
  • B. Amazon RDS Multi-AZ deployment
  • C. Amazon DynamoDB global table
  • D. Amazon Aurora Global Database

Answer: C

NEW QUESTION 18
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?

  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer: C
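
A minimal boto3 sketch of the Concurrency Scaling piece of answer C; the parameter group name and value are hypothetical, and the historical data would additionally be exposed through a Redshift Spectrum external schema.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Allow Redshift to add transient concurrency-scaling clusters when the number of
# incoming queries spikes (queue routing is controlled in the WLM configuration).
redshift.modify_cluster_parameter_group(
    ParameterGroupName="warehouse-params",  # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "max_concurrency_scaling_clusters",
            "ParameterValue": "4",  # hypothetical cap on extra clusters
        }
    ],
)
```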

NEW QUESTION 19
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?

  • A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
  • B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
  • C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
  • D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Answer: D
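
A minimal boto3 sketch of the Aurora Replica migration path in answer D; all identifiers, the instance class, and the ARN are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora PostgreSQL cluster that continuously replicates from the
# source RDS for PostgreSQL instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-migration",  # hypothetical
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:source-pg",  # placeholder
)

# The cluster needs at least one instance before it can serve traffic.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-migration-1",
    DBClusterIdentifier="aurora-pg-migration",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r5.xlarge",  # hypothetical class
)

# At cutover, once replica lag reaches zero, detach the cluster from the source:
# rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-migration")
```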

NEW QUESTION 20
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.
Which solution addresses these requirements?

  • A. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
  • B. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
  • C. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
  • D. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.

Answer: D
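
A minimal client-side illustration of answer D (rds.force_ssl=1 is set in the cluster parameter group); the endpoint, credentials, and local path to the downloaded RDS certificate bundle are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="finance-aurora-pg.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    dbname="finance",
    user="app_user",
    password="REPLACE_ME",
    # verify-full checks both the certificate chain and that the hostname matches,
    # which satisfies the server-identity-validation requirement.
    sslmode="verify-full",
    sslrootcert="/opt/certs/global-bundle.pem",  # downloaded RDS certificate bundle (placeholder path)
)
```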

NEW QUESTION 21
A company is running a two-tier ecommerce application in one AWS account. The application’s database tier is an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.
Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

  • A. Grant least privilege to groups, users, and roles
  • B. Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
  • C. Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
  • D. Use policy conditions to restrict access to selective IP addresses
  • E. Use AccessList Controls policy type to restrict users for database instance deletion
  • F. Enable AWS CloudTrail logging and Enhanced Monitoring

Answer: ACD
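
An illustrative IAM policy fragment for answers C and D (MFA for destructive actions, plus an IP condition), created here with boto3; the policy name, IP range, and scope are hypothetical and would be tailored to the least-privilege model of answer A.

```python
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny destructive RDS actions unless the caller authenticated with MFA.
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
        {
            # Deny RDS management calls from outside the corporate network range.
            "Effect": "Deny",
            "Action": "rds:*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},  # hypothetical CIDR
        },
    ],
}

iam.create_policy(
    PolicyName="rds-guardrails",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```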

NEW QUESTION 22
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)

Answer: A

NEW QUESTION 23
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.
Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

  • A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
  • B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
  • C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
  • D. Use Amazon QuickSight to view the SQL statement being run.
  • E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Answer: BE
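
A minimal boto3 sketch of answers B and E; the instance identifier, monitoring interval, and IAM role ARN are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Turn on Enhanced Monitoring (OS/process-level metrics) and Performance Insights
# (database load broken down by waits, SQL statements, hosts, and users).
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",  # hypothetical instance
    MonitoringInterval=60,                  # seconds; requires the monitoring role below
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",  # placeholder
    EnablePerformanceInsights=True,
    ApplyImmediately=True,
)
```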

NEW QUESTION 24
A retail company is about to migrate its online and mobile store to AWS. The company’s CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.
What should the Database Specialist do to meet these requirements?

  • A. Use Amazon DynamoDB global tables to synchronize transactions
  • B. Use Amazon EMR to copy the orders table data across Regions
  • C. Use Amazon Aurora Global Database to synchronize all transactions
  • D. Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them

Answer: A
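
A minimal boto3 sketch of a DynamoDB global table (answer A, the same mechanism as the global table in QUESTION 17); the table name, key, and Regions are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Base table with streams enabled (required before Regions can be added as replicas).
dynamodb.create_table(
    TableName="Orders",  # hypothetical table
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

dynamodb.get_waiter("table_exists").wait(TableName="Orders")

# Add replicas to turn the table into a global table (version 2019.11.21).
dynamodb.update_table(
    TableName="Orders",
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
        {"Create": {"RegionName": "ap-southeast-1"}},
    ],
)
```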

NEW QUESTION 25
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?

  • A. Use a blue-green deployment with a complete application-level failover test
  • B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
  • C. Use RDS fault injection queries to simulate the primary node failure
  • D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

Answer: B
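
A minimal boto3 sketch of answer B; the instance identifier is hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Reboot with failover: the Multi-AZ instance fails over to the standby in the
# other Availability Zone, which is the quickest way to observe the application's
# behavior during a failover without any code changes.
rds.reboot_db_instance(
    DBInstanceIdentifier="oracle-prod",  # hypothetical instance
    ForceFailover=True,
)
```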

NEW QUESTION 26
......

P.S. 2passeasy now are offering 100% pass ensure DBS-C01 dumps! All DBS-C01 exam questions have been updated with correct answers: https://www.2passeasy.com/dumps/DBS-C01/ (85 New Questions)