
DOP-C01 Exam Questions - Online Test



Act now and download your Amazon-Web-Services DOP-C01 test today! Do not waste time on worthless Amazon-Web-Services DOP-C01 tutorials. Download the updated Amazon-Web-Services AWS Certified DevOps Engineer - Professional exam with real questions and answers and begin to learn Amazon-Web-Services DOP-C01 like a true professional.

Free DOP-C01 Demo Online For Amazon-Web-Services Certification:

NEW QUESTION 1
Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below

  • A. Unmount the EBS volume, take a snapshot and encrypt the snapshot.
  • B. Re-mount the Amazon EBS volume
  • C. It is not possible to encrypt an EBS volume, you must use a lifecycle policy to transfer data to S3 for encryption.
  • D. Copy the unencrypted snapshot and check the box to encrypt the new snapshot.
  • E. Volumes restored from this encrypted snapshot will also be encrypted.
  • F. Create and mount a new, encrypted Amazon EBS volume.
  • G. Move the data to the new volume.
  • H. Delete the old Amazon EBS volume.

Answer: CD

Explanation:
These steps are given in the AWS documentation
To migrate data between encrypted and unencrypted volumes
1) Create your destination volume (encrypted or unencrypted, depending on your need).
2) Attach the destination volume to the instance that hosts the data to migrate.
3) Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there.
4) Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.
To encrypt a volume's data by means of snapshot copying
1) Create a snapshot of your unencrypted EBS volume. This snapshot is also unencrypted.
2) Copy the snapshot while applying encryption parameters. The resulting target snapshot is encrypted.
3) Restore the encrypted snapshot to a new volume, which is also encrypted.
For more information on EBS Encryption, please refer to the below AWS documentation link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
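As an illustration only, the snapshot-copy approach described above can be scripted with boto3. The region, volume ID, and Availability Zone below are placeholder assumptions, not values from the question.

    # Hypothetical sketch: encrypt an unencrypted EBS volume's data by
    # snapshotting it, copying the snapshot with encryption, and restoring.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # 1) Snapshot the unencrypted source volume (placeholder volume ID).
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="pre-encryption snapshot")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # 2) Copy the snapshot while applying encryption parameters.
    copy = ec2.copy_snapshot(SourceRegion="us-east-1",
                             SourceSnapshotId=snap["SnapshotId"],
                             Encrypted=True)  # default EBS key unless KmsKeyId is given
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

    # 3) Restore the encrypted snapshot to a new volume, which is also encrypted.
    ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")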

NEW QUESTION 2
Which of the following is not a rolling update type available for configuration updates in the Elastic Beanstalk service?

  • A. Rolling based on Health
  • B. Rolling based on Instances
  • C. Immutable
  • D. Rolling based on time

Answer: B

Explanation:
When you go to the configuration of your Elastic Beanstalk environment, below are the updates that are possible
DOP-C01 dumps exhibit
The AWS Documentation mentions
1) With health-based rolling updates, Elastic Beanstalk waits until instances in a batch pass health checks before moving on to the next batch.
2) For time-based rolling updates, you can configure the amount of time that Elastic Beanstalk waits after completing the launch of a batch of instances before moving on to the next batch. This pause time allows your application to bootstrap and start serving requests.
3) Immutable environment updates are an alternative to rolling updates that ensure that configuration changes that require replacing instances are applied efficiently and safely. If an immutable environment update fails, the rollback process requires only terminating an Auto Scaling group. A failed rolling update, on the other hand, requires performing an additional rolling update to roll back the changes.
For more information on Rolling updates for Elastic beanstalk configuration updates, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html
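As a rough boto3 sketch, the rolling update type can be selected through the aws:autoscaling:updatepolicy:rollingupdate option namespace; the environment name below is a placeholder assumption.

    # Hypothetical sketch: switch an Elastic Beanstalk environment to
    # time-based rolling configuration updates.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    eb.update_environment(
        EnvironmentName="my-env",  # placeholder environment name
        OptionSettings=[
            {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
             "OptionName": "RollingUpdateEnabled",
             "Value": "true"},
            {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
             "OptionName": "RollingUpdateType",
             "Value": "Time"},  # other values: Health, Immutable
        ],
    )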

NEW QUESTION 3
Which of the following is the right sequence of initial steps in the deployment of application revisions using AWS CodeDeploy?
1) Specify deployment configuration
2) Upload revision
3) Create application
4) Specify deployment group

  • A. 3, 2, 1 and 4
  • B. 3,1,2 and 4
  • C. 3,4,1 and 2
  • D. 3,4,2 and 1

Answer: C

Explanation:
The below diagram from the AWS documentation shows the deployment steps
DOP-C01 dumps exhibit
For more information on the deployment steps please refer to the below link:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html
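For illustration, the same sequence can be driven through boto3; the application name, bucket, tags, and role ARN below are placeholder assumptions.

    # Hypothetical sketch of the CodeDeploy flow: create application,
    # create deployment group (with a deployment configuration), then
    # deploy a revision that was uploaded to S3.
    import boto3

    cd = boto3.client("codedeploy", region_name="us-east-1")

    cd.create_application(applicationName="my-app")            # 3) create application
    cd.create_deployment_group(                                  # 4) specify deployment group
        applicationName="my-app",
        deploymentGroupName="my-app-prod",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployRole",   # placeholder
        deploymentConfigName="CodeDeployDefault.OneAtATime",              # 1) deployment configuration
        ec2TagFilters=[{"Key": "Name", "Value": "web", "Type": "KEY_AND_VALUE"}],
    )
    cd.create_deployment(                                        # 2) deploy the uploaded revision
        applicationName="my-app",
        deploymentGroupName="my-app-prod",
        revision={"revisionType": "S3",
                  "s3Location": {"bucket": "my-bucket", "key": "app.zip",
                                 "bundleType": "zip"}},
    )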

NEW QUESTION 4
Which of the following Cloudformation helper scripts can help install packages on EC2 resources

  • A. cfn-init
  • B. cfn-signal
  • C. cfn-get-metadata
  • D. cfn-hup

Answer: A

Explanation:
The AWS Documentation mentions
Currently, AWS CloudFormation provides the following helpers:
cfn-init: Used to retrieve and interpret resource metadata, install packages, create files and start services.
cfn-signal: A simple wrapper to signal an AWS CloudFormation CreationPolicy or WaitCondition,
enabling you to synchronize other resources in the stack with the application being ready.
cfn-get-metadata: A wrapper script making it easy to retrieve either all metadata defined for a resource or path to a specific key or subtree of the resource metadata.
cfn-hup: A daemon to check for updates to metadata and execute custom hooks when the changes are detected.
For more information on helper scripts, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
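As a minimal sketch, the package installation that cfn-init performs is driven by AWS::CloudFormation::Init metadata on the instance resource. The template fragment below is expressed as a Python dict purely for illustration; the AMI ID and instance type are placeholders, and cfn-init itself is normally invoked from the instance UserData.

    # Hypothetical fragment: an EC2 resource whose metadata tells cfn-init
    # to install and run httpd.
    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Metadata": {
                    "AWS::CloudFormation::Init": {
                        "config": {
                            "packages": {"yum": {"httpd": []}},   # packages to install
                            "services": {"sysvinit": {"httpd": {"enabled": "true",
                                                                "ensureRunning": "true"}}},
                        }
                    }
                },
                "Properties": {"ImageId": "ami-12345678",        # placeholder values
                               "InstanceType": "t2.micro"},
            }
        },
    }

    # Syntax-check the assembled template (see also Question 11).
    boto3.client("cloudformation").validate_template(TemplateBody=json.dumps(template))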

NEW QUESTION 5
You are a DevOps Engineer for your company. Your company is using an OpsWorks stack to roll out a collection of web instances. When the instances are launched, a configuration file needs to be set up prior to the launch of the web application hosted on these instances. Which of the following steps would you carry out to ensure this requirement gets fulfilled? Choose 2 answers from the options given below.

  • A. Ensure that the OpsWorks stack is changed to use the AWS specific cookbooks
  • B. Ensure that the OpsWorks stack is changed to use custom cookbooks
  • C. Configure a recipe which sets the configuration file and add it to the Configure lifecycle event of the specific web layer.
  • D. Configure a recipe which sets the configuration file and add it to the Deploy lifecycle event of the specific web layer.

Answer: BC

Explanation:
This is mentioned in the AWS documentation for the Configure event.
This event occurs on all of the stack's instances when one of the following occurs:
• An instance enters or leaves the online state.
• You associate an Elastic IP address with an instance or disassociate one from an instance.
• You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
For example, suppose that your stack has instances A, B, and C, and you start a new instance, D. After D has finished running its setup recipes, AWS OpsWorks Stacks triggers the Configure event on A, B, C, and D. If you subsequently stop A, AWS OpsWorks Stacks triggers the Configure event on B, C, and D. AWS OpsWorks Stacks responds to the Configure event by running each layer's Configure recipes, which update the instances' configuration to reflect the current set of online instances. The Configure event is therefore a good time to regenerate configuration files. For example, the HAProxy Configure recipes reconfigure the load balancer to accommodate any changes in the set of online application server instances.
You can also manually trigger the Configure event by using the Configure stack command. For more information on Opswork lifecycle events, please refer to the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
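As an illustrative boto3 sketch only, the two chosen answers map to enabling custom cookbooks on the stack and attaching a recipe to the Configure lifecycle event of the layer. The stack ID, cookbook repository, and recipe name are placeholder assumptions.

    # Hypothetical sketch: use custom cookbooks and run a config-file recipe
    # on the Configure lifecycle event of a custom web layer.
    import boto3

    ow = boto3.client("opsworks", region_name="us-east-1")
    STACK_ID = "11111111-2222-3333-4444-555555555555"   # placeholder stack ID

    ow.update_stack(
        StackId=STACK_ID,
        UseCustomCookbooks=True,
        CustomCookbooksSource={"Type": "git",
                               "Url": "https://github.com/example/my-cookbooks.git"},
    )

    ow.create_layer(
        StackId=STACK_ID,
        Type="custom",
        Name="web",
        Shortname="web",
        CustomRecipes={"Configure": ["webconfig::generate_config_file"]},  # assumed recipe
    )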

NEW QUESTION 6
Your company has a set of resources hosted in AWS. Your IT Supervisor is concerned with the costs being incurred by the resources running in AWS and wants to optimize on the costs as much as possible. Which of the following ways could help achieve this efficiently? Choose 2 answers from the options given below.

  • A. Create Cloudwatch alarms to monitor underutilized resources and either shutdown or terminate resources which are not required.
  • B. Use the Trusted Advisor to see underutilized resources
  • C. Create a script which monitors all the running resources and calculates the costs accordingly.
  • D. Then analyze those resources and see which can be optimized.
  • E. Create Cloudwatch logs to monitor underutilized resources and either shutdown or terminate resources which are not required.

Answer: AB

Explanation:
You can use Cloudwatch alarms to see if resources are below a threshold for long periods of time. If so you can take the decision to either stop them or to terminate the resources.
For more information on Cloudwatch alarms, please visit the below URL:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
In the Trusted Advisor, when you enable the Cost optimization section, you will get all sorts of checks which can be used to optimize the costs of your AWS resources.
DOP-C01 dumps exhibit
For more information on the Trusted Advisor, please visit the below URL:
• https://aws.amazon.com/premiumsupport/trustedadvisor/
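As a minimal sketch of the CloudWatch alarm part of the answer, the alarm below stops an instance after a sustained period of low CPU; the instance ID, thresholds, and region are placeholder assumptions.

    # Hypothetical sketch: alarm on sustained low CPU and stop the instance
    # automatically via the built-in EC2 stop action.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    cw.put_metric_alarm(
        AlarmName="low-utilization-i-0123456789abcdef0",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=3600,                  # one-hour periods
        EvaluationPeriods=24,         # a full day below the threshold
        Threshold=5.0,
        ComparisonOperator="LessThanOrEqualToThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],   # stop the idle instance
    )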

NEW QUESTION 7
You are working as an AWS DevOps admin for your company. You are in charge of building the infrastructure for the company's development teams using CloudFormation. The template will include building the VPC and networking components, installing a LAMP stack and securing the created resources. As per the AWS best practices, what is the best way to design this template?

  • A. Create a single cloudformation template to create all the resources since it would be easier from the maintenance perspective.
  • B. Create multiple cloudformation templates based on the number of VPC's in the environment.
  • C. Create multiple cloudformation templates based on the number of development groups in the environment.
  • D. Create multiple cloudformation templates for each set of logical resources, one for networking, the other for LAMP stack creation.

Answer: D

Explanation:
Creating multiple cloudformation templates is an example of using nested stacks. The advantage of using nested stacks is given below as per the AWS documentation
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single,
unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on Cloudformation best practices, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
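For illustration, a parent template can nest the networking and LAMP templates as AWS::CloudFormation::Stack resources. The fragment below is written as a Python dict; the S3 template URLs and the VpcId output name are placeholder assumptions.

    # Hypothetical fragment: a parent stack that nests a network stack and a
    # LAMP stack, passing an output of the first as a parameter to the second.
    parent_template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "NetworkStack": {
                "Type": "AWS::CloudFormation::Stack",
                "Properties": {
                    "TemplateURL": "https://s3.amazonaws.com/my-bucket/network.yaml"
                },
            },
            "LampStack": {
                "Type": "AWS::CloudFormation::Stack",
                "DependsOn": "NetworkStack",
                "Properties": {
                    "TemplateURL": "https://s3.amazonaws.com/my-bucket/lamp.yaml",
                    "Parameters": {
                        # assumed output exported by the nested network template
                        "VpcId": {"Fn::GetAtt": ["NetworkStack", "Outputs.VpcId"]}
                    },
                },
            },
        },
    }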

NEW QUESTION 8
Your company uses an application hosted in AWS which consists of EC2 Instances. The logs of the EC2 instances need to be processed and analyzed in real time, since this is a requirement from the IT Security department. Which of the following can be used to process the logs in real time?

  • A. Use CloudWatch Logs to process and analyze the logs in real time
  • B. Use Amazon Glacier to store the logs and then use Amazon Kinesis to process and analyze the logs in real time
  • C. Use Amazon S3 to store the logs and then use Amazon Kinesis to process and analyze the logs in real time
  • D. Use another EC2 Instance with a larger instance type to process the logs

Answer: C

Explanation:
The AWS Documentation mentions the below:
Real-time metrics and reporting
You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
Real-time data analytics
This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Kinesis Streams applications running in parallel.
Amazon Glacier is meant for Archival purposes and should not be used for storing the logs for real time processing.
For more information on Amazon Kinesis, please refer to the below link:
• http://docs.aws.amazon.com/streams/latest/dev/introduction.html
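As a rough sketch of the ingestion side, a producer on the instance can push log lines into a Kinesis stream for a downstream consumer to analyze in real time; the stream name and log path below are placeholder assumptions.

    # Hypothetical sketch: push application log lines into a Kinesis stream.
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    with open("/var/log/myapp/app.log") as log_file:     # placeholder log path
        for line in log_file:
            kinesis.put_record(
                StreamName="security-logs",              # placeholder stream name
                Data=line.encode("utf-8"),
                PartitionKey="i-0123456789abcdef0",      # e.g. the source instance ID
            )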

NEW QUESTION 9
You have deployed an application to AWS which makes use of Autoscaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment?

  • A. Use Elastic Beanstalk to deploy the new application with the new instance type
  • B. Use Cloudformation to deploy the new application with the new instance type
  • C. Create a new launch configuration with the new instance type
  • D. Create new EC2 instances with the new instance type and attach it to the Autoscaling Group

Answer: C

Explanation:
The ideal way is to create a new launch configuration, attach it to the existing Auto Scaling group, and terminate the running instances.
Option A is invalid because Elastic Beanstalk cannot launch new instances on demand. Since the current scenario requires Autoscaling, this is not the ideal option
Option B is invalid because this will be a maintenance overhead, since you just have an Autoscaling Group. There is no need to create a whole Cloudformation
template for this.
Option D is invalid because the Autoscaling Group will still launch EC2 instances with the older launch configuration
For more information on Autoscaling Launch configuration, please refer to the below AWS documentation link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
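As a minimal sketch of that action item, create a new launch configuration with the new instance type and attach it to the existing group; the names, AMI, and instance type below are placeholder assumptions.

    # Hypothetical sketch: new launch configuration with a new instance type,
    # then point the existing Auto Scaling group at it.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    asg.create_launch_configuration(
        LaunchConfigurationName="web-lc-v2",
        ImageId="ami-12345678",          # placeholder AMI
        InstanceType="m5.large",         # the new instance type
    )

    asg.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",              # placeholder group name
        LaunchConfigurationName="web-lc-v2",
    )
    # Existing instances keep the old type until they are replaced.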

NEW QUESTION 10
As an architect you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying the applications in your company. Unfortunately, you have discovered that there is a
resource type that is not supported by CloudFormation. What can you do to get around this?

  • A. Specify more mappings and separate your template into multiple templates by using nested stacks.
  • B. Create a custom resource type using template developer, custom resource template, and CloudFormation.
  • D. Specify the custom resource by separating your template into multiple templates by using nested stacks.
  • E. Use a configuration management tool such as Chef, Puppet, or Ansible.

Answer: B

Explanation:
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack.
For more information on custom resources in Cloudformation please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
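As an illustration only, a Lambda-backed custom resource handler does its custom provisioning work and then reports the result back to the pre-signed ResponseURL that CloudFormation includes in the event; the physical resource ID below is a placeholder assumption.

    # Hypothetical sketch of a Lambda-backed custom resource handler.
    import json
    import urllib.request

    def handler(event, context):
        # ... perform the custom provisioning logic here ...
        response = {
            "Status": "SUCCESS",
            "Reason": "Provisioned resource not natively supported by CloudFormation",
            "PhysicalResourceId": "my-custom-resource",   # placeholder ID
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "Data": {},
        }
        body = json.dumps(response).encode("utf-8")
        req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                     headers={"Content-Type": ""})
        urllib.request.urlopen(req)   # tells CloudFormation the resource is ready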

NEW QUESTION 11
As part of your deployment pipeline, you want to enable automated testing of your AWS CloudFormation template. What testing should be performed to enable faster feedback while minimizing costs and risk? Select three answers from the options given below

  • A. Use the AWS CloudFormation Validate Template to validate the syntax of the template
  • B. Use the AWS CloudFormation Validate Template to validate the properties of resources defined in the template.
  • C. Validate the template's syntax using a general JSON parser.
  • D. Validate the AWS CloudFormation template against the official XSD schema definition published by Amazon Web Services.
  • E. Update the stack with the template.
  • F. If the template fails, rollback will return the stack and its resources to exactly the same state.
  • G. When creating the stack, specify an Amazon SNS topic to which your testing system is subscribed.
  • H. Your testing system runs tests when it receives notification that the stack is created or updated.

Answer: AEF

Explanation:
The AWS documentation mentions the following
The aws cloudformation validate-template command is designed to check only the syntax of your template. It does not ensure that the property values that you have specified for a resource are valid for that resource. Nor does it determine the number of resources that will exist when the stack is created.
To check the operational validity, you need to attempt to create the stack. There is no sandbox or test area for AWS CloudFormation stacks, so you are charged for the resources you create during testing. Option F is needed for notification.
For more information on Cloudformation template validation, please visit the link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-validate-template.html
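For illustration, the syntax check and the SNS-notified stack creation can be combined in a small pipeline step; the template file name, stack name, and SNS topic ARN below are placeholder assumptions.

    # Hypothetical sketch: syntax-check a template, then create the stack with
    # an SNS topic attached so the testing system is notified of stack events.
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("template.json") as f:                # placeholder template file
        body = f.read()

    cfn.validate_template(TemplateBody=body)        # syntax check only

    cfn.create_stack(                               # operational test with notifications
        StackName="pipeline-test-stack",
        TemplateBody=body,
        NotificationARNs=["arn:aws:sns:us-east-1:123456789012:stack-events"],  # placeholder
    )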

NEW QUESTION 12
You work as a DevOps Engineer for your company. There are currently a number of environments hosted via Elastic Beanstalk. There is a requirement to ensure that the rollback time for a new application version deployment is kept to a minimum. Which Elastic Beanstalk deployment method would fulfil this requirement?

  • A. Rolling with additional batch
  • B. All at Once
  • C. Blue/Green
  • D. Rolling

Answer: C

Explanation:
The below table from the AWS documentation shows that the least amount of time is spent in rollbacks when it comes to Blue Green deployments. This is because the only thing that needs to be done is for URL's to be swapped.
DOP-C01 dumps exhibit
For more information on Elastic beanstalk deployment strategies, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
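As a minimal sketch, the blue/green cutover (and the rollback) is a CNAME swap between the two environments; the environment names below are placeholder assumptions.

    # Hypothetical sketch: swap CNAMEs between the blue and green environments.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    eb.swap_environment_cnames(
        SourceEnvironmentName="my-app-blue",        # placeholder names
        DestinationEnvironmentName="my-app-green",
    )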

NEW QUESTION 13
You are working with a customer who is using Chef Configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

  • A. Amazon Simple Workflow Service
  • B. AWS Elastic Beanstalk
  • C. AWS CloudFormation
  • D. AWS OpsWorks

Answer: D

Explanation:
AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application's architecture and the specification of each component including package installation, software configuration and resources
such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.
For more information on OpsWorks, please visit the link:
• https://aws.amazon.com/opsworks/

NEW QUESTION 14
Your company is concerned with EBS volume backup on Amazon EC2 and wants to ensure they have proper backups and that the data is durable. What solution would you implement and why? Choose the correct answer from the options below

  • A. Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway
  • B. Write a cron job on the server that compresses the data that needs to be backed up using gzip compression, then use AWS CLI to copy the data into an S3 bucket for durability
  • C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability
  • D. Write a cron job that uses the AWS CLI to take a snapshot of the production EBS volumes.
  • E. The data is durable because EBS snapshots are stored on the Amazon S3 standard storage class

Answer: D

Explanation:
You can take snapshots of EBS volumes and to automate the process you can use the CLI. The snapshots are automatically stored on S3 for durability.
For more information on EBS snapshots, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

NEW QUESTION 15
You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application with the right authentication/authorization implementation?

  • A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts.
  • B. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3.
  • C. Use JWT or SAML compliant systems to build authorization policies.
  • D. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side.
  • E. Construct a custom build of the SDK and include S3 access in it.
  • F. Create an AWS OAuth Service Domain and grant public signup and access to the domain.
  • G. During setup, add at least one major social media site as a trusted Identity Provider for users.

Answer: A

Explanation:
Amazon Cognito lets you easily add user sign-up and sign-in and manage permissions for your mobile and web apps. You can create your own user directory within Amazon Cognito. You can also choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with SAML identity solutions; or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use.
For more information on AWS Cognito, please visit the below URL:
• http://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

NEW QUESTION 16
The company you work for has a huge amount of infrastructure built on AWS. However, there have been some concerns recently about the security of this infrastructure, and an external auditor has been given the task of running a thorough check of all of your company's AWS assets. The auditor will be in the USA while your company's infrastructure resides in the Asia Pacific (Sydney) region on AWS. Initially, he needs to check all of your VPC assets, specifically, security groups and NACLs. You have been assigned the task of providing the auditor with a login to be able to do this. Which of the following would be the best and most secure solution to provide the auditor with so he can begin his initial investigations? Choose the correct answer from the options below

  • A. Create an IAM user tied to an administrator role.
  • B. Also provide an additional level of security with MFA.
  • C. Give him root access to your AWS Infrastructure, because he is an auditor he will need access to every service.
  • D. Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials.
  • E. Create an IAM user with full VPC access but set a condition that will not allow him to modify anything if the request is from any IP other than his own.

Answer: C

Explanation:
Generally you should refrain from giving high level permissions and give only the required permissions. In this case option C fits well by just providing the relevant access which is required.
For more information on IAM please see the below link:
• https://aws.amazon.com/iam/
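As an illustrative sketch of the read-only approach, an auditor user can be attached to the AWS managed read-only VPC policy; the user name and initial password below are placeholder assumptions.

    # Hypothetical sketch: create an auditor user with read-only VPC access.
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="external-auditor")                       # placeholder name
    iam.attach_user_policy(
        UserName="external-auditor",
        PolicyArn="arn:aws:iam::aws:policy/AmazonVPCReadOnlyAccess",   # AWS managed policy
    )
    iam.create_login_profile(UserName="external-auditor",
                             Password="ChangeMe123!",                  # placeholder, to be changed
                             PasswordResetRequired=True)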

NEW QUESTION 17
Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? Choose 2 answers from the options below

  • A. Deploy an ElastiCache in-memory cache running in each availability zone
  • B. Implement sharding to distribute load to multiple RDS MySQL instances
  • C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
  • D. Add an RDS MySQL read replica in each availability zone

Answer: AD

Explanation:
Implement Read Replicas and ElastiCache
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
For more information on Read Replica's, please visit the below link
• https://aws.amazon.com/rds/details/read-replicas/
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
For more information on Amazon ElastiCache, please visit the below link
• https://aws.amazon.com/elasticache/
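For illustration, the read replica part of the answer amounts to a single API call per Availability Zone; the identifiers and AZ below are placeholder assumptions.

    # Hypothetical sketch: add a read replica in a second Availability Zone
    # to offload read traffic from the source DB instance.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="social-db-replica-1b",     # placeholder replica name
        SourceDBInstanceIdentifier="social-db",          # placeholder source instance
        AvailabilityZone="us-east-1b",
    )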

NEW QUESTION 18
You need to monitor specific metrics from your application and send real-time alerts to your Devops Engineer. Which of the below services will fulfil this requirement? Choose two answers

  • A. Amazon CloudWatch
  • B. Amazon Simple Notification Service
  • C. Amazon Simple Queue Service
  • D. Amazon Simple Email Service

Answer: AB

Explanation:
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define.
For more information on AWS CloudWatch, please refer to the below AWS documentation link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html
Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic.
When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state
For more information on AWS CloudWatch and SNS, please refer to the below AWS documentation link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html

NEW QUESTION 19
You have an Auto Scaling group with 2 AZs. One AZ has 4 EC2 instances and the other has 3 EC2 instances. None of the instances are protected from scale in. Based on the default Auto Scaling termination policy what will happen?

  • A. Auto Scaling selects an instance to terminate randomly
  • B. Auto Scaling will terminate unprotected instances in the Availability Zone with the oldest launch configuration.
  • C. Auto Scaling terminates the unprotected instances that are closest to the next billing hour.
  • D. Auto Scaling will select the AZ with 4 EC2 instances and terminate an instance.

Answer: D

Explanation:
The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. When using the default termination policy,
Auto Scaling selects an instance to terminate as follows:
Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration.
For more information on Autoscaling instance termination please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html

NEW QUESTION 20
Your company has the requirement to set up instances running as part of an Autoscaling Group. Part of the requirement is to use Lifecycle hooks to set up custom software and do the necessary configuration on the instances. The time required for this setup might take an hour, or might finish before the hour is up. How should you set up lifecycle hooks for the Autoscaling Group? Choose 2 ideal actions you would include as part of the lifecycle hook.

  • A. Configure the lifecycle hook to record heartbeats.
  • B. If the hour is up, restart the timeout period.
  • C. Configure the lifecycle hook to record heartbeats.
  • D. If the hour is up, choose to terminate the current instance and start a new one
  • E. If the software installation and configuration is complete, then restart the time period.
  • F. If the software installation and configuration is complete, then send a signal to complete the launch of the instance.

Answer: AD

Explanation:
The AWS Documentation provides the following information on lifecycle hooks
By default, the instance remains in a wait state for one hour, and then Auto Scaling continues the launch or terminate process (Pending: Proceed or Terminating: Proceed). If you need more time, you can restart the timeout period by recording a heartbeat. If you finish before the timeout period ends, you can complete the lifecycle action, which continues the launch or termination process
For more information on AWS Lifecycle hooks, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
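As a rough sketch, the bootstrap script on the instance can extend the hook with heartbeats while installation is still running and then signal completion; the hook name, group name, and action token below are placeholder assumptions.

    # Hypothetical sketch: keep the lifecycle hook alive, then complete it.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    HOOK = "install-software-hook"                            # placeholder hook name
    GROUP = "web-asg"                                         # placeholder group name
    TOKEN = "lifecycle-action-token-from-the-notification"    # placeholder token

    # Still installing? Restart the timeout period with a heartbeat.
    asg.record_lifecycle_action_heartbeat(
        LifecycleHookName=HOOK,
        AutoScalingGroupName=GROUP,
        LifecycleActionToken=TOKEN,
    )

    # Installation and configuration finished: let the launch continue.
    asg.complete_lifecycle_action(
        LifecycleHookName=HOOK,
        AutoScalingGroupName=GROUP,
        LifecycleActionToken=TOKEN,
        LifecycleActionResult="CONTINUE",
    )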

NEW QUESTION 21
You work for an accounting firm and need to store important financial data for clients. Initial frequent access to data is required, but after a period of 2 months, the data can be archived and brought back only in the case of an audit. What is the most cost-effective way to do this?

  • A. Store all data in Glacier
  • B. Store all data in a private S3 bucket
  • C. Use lifecycle management to store all data in Glacier
  • D. Use lifecycle management to move data from S3 to Glacier

Answer: D

Explanation:
The AWS Documentation mentions the following
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
Expiration actions - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information on S3 Lifecycle policies, please visit the below URL:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
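For illustration, the lifecycle rule in option D can be expressed as a transition to Glacier roughly 2 months (60 days) after creation; the bucket name below is a placeholder assumption.

    # Hypothetical sketch: transition all objects to Glacier after 60 days.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="client-financial-data",                      # placeholder bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-two-months",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},                # apply to all objects
                    "Transitions": [{"Days": 60, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )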

NEW QUESTION 22
You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach?

  • A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
  • B. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which DynamoDB is running.
  • C. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.
  • D. Set up a DynamoDB Global table.
  • E. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which the DynamoDB is running.
  • F. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.
  • G. Set up a DynamoDB Multi-Region table.
  • H. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
  • I. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
  • J. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

Answer: B

Explanation:
Updated based on latest AWS updates
Option A is invalid because using Latency based routing will send traffic to the region with the standby instance. This is an active/passive replication and you can't write to the standby table unless there is a failover. Option A can work only if you use a failover routing policy.
Option D is invalid because there is no concept of a cross-region ELB.
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagate ongoing data changes to all of them.
For more information on DynamoDB GlobalTables, please visit the below URL: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
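As an illustrative sketch (using the original 2017 version of the global tables API), identical tables with streams enabled are created in each region and then joined into a global table; the table name, key schema, and regions below are placeholder assumptions.

    # Hypothetical sketch: create per-region tables, then form a global table.
    import boto3

    REGIONS = ["us-east-1", "us-west-2"]        # placeholder regions
    TABLE = "api-data"                          # placeholder table name

    for region in REGIONS:
        ddb = boto3.client("dynamodb", region_name=region)
        ddb.create_table(
            TableName=TABLE,
            AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",
            # Streams with new and old images are required for global tables.
            StreamSpecification={"StreamEnabled": True,
                                 "StreamViewType": "NEW_AND_OLD_IMAGES"},
        )
        ddb.get_waiter("table_exists").wait(TableName=TABLE)

    boto3.client("dynamodb", region_name=REGIONS[0]).create_global_table(
        GlobalTableName=TABLE,
        ReplicationGroup=[{"RegionName": r} for r in REGIONS],
    )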

NEW QUESTION 23
......

100% Valid and Newest Version DOP-C01 Questions & Answers shared by Certshared, Get Full Dumps HERE: https://www.certshared.com/exam/DOP-C01/ (New 116 Q&As)