
DOP-C01 Exam Questions - Online Test




NEW QUESTION 1
You need to run a very large batch data processing job one time per day. The source data exists
entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?

  • A. Model an AWS EMR job in AWS Elastic Beanstalk.
  • B. Model an AWS EMR job in AWS CloudFormation.
  • C. Model an AWS EMR job in AWS OpsWorks.
  • D. Model an AWS EMR job in AWS CLI Composer.

Answer: B

Explanation:
With AWS CloudFormation, you can update the properties for resources in your existing stacks.
These changes can range from simple configuration changes, such as updating the alarm threshold on a CloudWatch alarm, to more complex changes, such as updating the Amazon Machine Image (AMI) running on an Amazon EC2 instance. Many of the AWS resources in a template can be updated, and we continue to add support for more.
For more information on CloudFormation version control, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/updating.stacks.walkthrough.html
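To illustrate the version-control angle, here is a minimal sketch of such a template, built as a Python dict and emitted as JSON. The cluster name, release label, instance types, roles, and S3 log path are illustrative assumptions, not details from the question:

```python
import json

# A minimal CloudFormation template (as a Python dict) modelling a one-off
# EMR job whose setup and teardown can live in version control alongside
# the rest of the system. All property values below are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BatchCluster": {
            "Type": "AWS::EMR::Cluster",
            "Properties": {
                "Name": "daily-batch-job",
                "ReleaseLabel": "emr-5.36.0",
                "JobFlowRole": "EMR_EC2_DefaultRole",
                "ServiceRole": "EMR_DefaultRole",
                "Instances": {
                    "MasterInstanceGroup": {"InstanceCount": 1, "InstanceType": "m5.xlarge"},
                    "CoreInstanceGroup": {"InstanceCount": 2, "InstanceType": "m5.xlarge"},
                },
                "LogUri": "s3://my-bucket/emr-logs/",
            },
        }
    },
}

# The JSON form is what you would commit to source control and pass to
# CloudFormation for stack creation and teardown.
print(json.dumps(template, indent=2))
```

Because the whole job definition is a text file, creating and deleting the stack gives you repeatable setup and teardown logic.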

NEW QUESTION 2
Your development team is using an Elastic Beanstalk environment. After a week, the environment was torn down and a new one was created. When the development team tried to access the data on the older environment, it was not available. Why is this the case?

  • A. This is because the underlying EC2 Instances are created with encrypted storage and cannot be accessed once the environment has been terminated.
  • B. This is because the underlying EC2 Instances are created with IOPS volumes and cannot be accessed once the environment has been terminated.
  • C. This is because before the environment termination, Elastic Beanstalk copies the data to DynamoDB, and hence the data is not present in the EBS volumes.
  • D. This is because the underlying EC2 Instances are created with no persistent local storage.

Answer: D

Explanation:
The AWS documentation mentions the following:
Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage.
When the Amazon EC2 instances terminate, the local file system is not saved, and new Amazon EC2 instances start with a default file system. You should design your application to store data in a persistent data source.
For more information on Elastic Beanstalk design concepts, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.design.html

NEW QUESTION 3
You currently have an application deployed via Elastic Beanstalk. You are now deploying a new application and have ensured that Elastic Beanstalk has detached the current instances and deployed and reattached new instances. But the new instances are still not receiving any traffic. Why is this the case?

  • A. The instances are of the wrong AMI, hence they are not being detected by the ELB.
  • B. It takes time for the ELB to register the instances, hence there is a small timeframe before your instances can start receiving traffic
  • C. You need to create a new Elastic Beanstalk application, because you cannot detach and then reattach instances to an ELB within an Elastic Beanstalk application
  • D. The instances needed to be reattached before the new application version was deployed

Answer: B

Explanation:
Before the EC2 Instances can start receiving traffic, they will be checked via the health checks of the ELB. Once the health checks are successful, the EC2 Instance
will change its state to InService, and then the EC2 Instances can start receiving traffic. For more information on ELB health checks, please refer to the below link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

NEW QUESTION 4
You are a DevOps Engineer for your company. You are planning on using CloudWatch for monitoring the resources hosted in AWS. Which of the following can you ideally do with CloudWatch Logs? Choose 3 answers from the options given below.

  • A. Stream the log data to Amazon Kinesis for further processing.
  • B. Send the log data to AWS Lambda for custom processing.
  • C. Stream the log data into Amazon Elasticsearch for any search analysis required.
  • D. Send the data to SQS for further processing.

Answer: ABC

Explanation:
Amazon Kinesis can be used for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. AWS Lambda is a web service which can be used to do serverless computing of the logs which are published by CloudWatch Logs. Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.
For more information on CloudWatch Logs please see the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
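As a sketch of how the streaming options are wired up, the following builds the request parameters you might pass to the CloudWatch Logs PutSubscriptionFilter API (for example via boto3's `logs.put_subscription_filter`) to stream a log group to a Kinesis stream. The log group, stream ARN, and role ARN are placeholder assumptions:

```python
# Build the parameters for a CloudWatch Logs subscription filter that
# streams every log event in a group to a Kinesis stream. A similar
# structure (with a Lambda function ARN and no roleArn) targets Lambda.
def kinesis_subscription_params(log_group, stream_arn, role_arn):
    return {
        "logGroupName": log_group,
        "filterName": "to-kinesis",
        "filterPattern": "",  # an empty pattern forwards every event
        "destinationArn": stream_arn,
        "roleArn": role_arn,  # role CloudWatch Logs assumes to write to Kinesis
    }

params = kinesis_subscription_params(
    "/my-app/prod",
    "arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",
    "arn:aws:iam::123456789012:role/cwl-to-kinesis",
)
```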

NEW QUESTION 5
Your application uses Amazon SQS and Auto Scaling to process background jobs. The Auto Scaling policy is based on the number of messages in the queue, with a maximum instance count of 100. Since the application was launched, the group has never scaled above 50. The Auto Scaling group has now scaled to 100, the queue size is increasing, and very few jobs are being completed. The number of messages being sent to the queue is at normal levels. What should you do to identify why the queue size is unusually high and to reduce it?

  • A. Temporarily increase the Auto Scaling group's desired value to 200. When the queue size has been reduced, reduce it to 50.
  • B. Analyze the application logs to identify possible reasons for message processing failure and resolve the cause for failure.
  • C. V
  • D. Create additional Auto Scaling groups enabling the processing of the queue to be performed in parallel.
  • E. Analyze CloudTrail logs for Amazon SQS to ensure that the instance's Amazon EC2 role has permission to receive messages from the queue.

Answer: B

Explanation:
Here the best option is to look at the application logs and resolve the failure. You could be having a functionality issue in the application that is causing the messages to queue up and increase the fleet of instances in the Auto Scaling group.
For more information on centralized logging system implementation in AWS, please visit this link: https://aws.amazon.com/answers/logging/centralized-logging/

NEW QUESTION 6
Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost. How can you provide management with a better insight into data transfer use?

  • A. Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.
  • B. Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer.
  • C. Deliver custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple, more specific data points.
  • D. Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.

Answer: C

Explanation:
You can publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your published metrics with the AWS Management Console.
CloudWatch stores data about a metric as a series of data points. Each data point has an associated time stamp. You can even publish an aggregated set of data points called a statistic set.
If you have custom metrics specific to your application, you can give a breakdown to the management on the exact issue.
Option A won't be sufficient to provide better insights.
Option B is an overhead when you can make the application publish custom metrics.
Option D is invalid because just the ELB metrics will not give the entire picture.
For more information on custom metrics, please refer to the below AWS documentation link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
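A hedged sketch of what publishing such a custom metric could look like: these are the parameters you might pass to CloudWatch's PutMetricData API (e.g. via boto3). The namespace, metric name, and dimension are assumptions for illustration:

```python
from datetime import datetime, timezone

# Build a PutMetricData payload reporting per-application outbound data
# transfer, so billing anomalies can be traced to a specific application.
def data_transfer_metric(app_name, bytes_out):
    return {
        "Namespace": "MyCompany/DataTransfer",   # custom (non-AWS) namespace
        "MetricData": [{
            "MetricName": "BytesOut",
            "Dimensions": [{"Name": "Application", "Value": app_name}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": float(bytes_out),
            "Unit": "Bytes",
        }],
    }

payload = data_transfer_metric("billing-api", 52_428_800)  # 50 MiB
```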

NEW QUESTION 7
An EC2 instance has failed a health check. What will the ELB do?

  • A. The ELB will terminate the instance
  • B. The ELB stops sending traffic to the instance that failed its health check
  • C. The ELB does nothing
  • D. The ELB will replace the instance

Answer: B

Explanation:
The AWS Documentation mentions
The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.
For more information on ELB health checks, please refer to the below link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

NEW QUESTION 8
You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer. How should you configure AWS OpsWorks to manage scaling this application dynamically?

  • A. Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure LifeCycle Event of the specific layer.
  • B. Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances. Configure your base AMI to execute this script on Operating System startup.
  • C. Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched.
  • D. Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipe's settings.

Answer: A

Explanation:
Please check the following AWS documentation, which provides details on this scenario; see the example for "configure":
https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
You can use the Configure lifecycle event. This event occurs on all of the stack's instances when one of the following occurs:
• An instance enters or leaves the online state.
• You associate an Elastic IP address with an instance or disassociate one from an instance.
• You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
Ensure the OpsWorks layer uses a custom cookbook.
For more information on OpsWorks stacks, please refer to the below AWS documentation link:
• http://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html

NEW QUESTION 9
Your development team uses .NET to code their web application. They want to deploy it to AWS for the purpose of continuous integration and deployment. The application code is hosted in a Git repository. Which of the following combination of steps can be used to fulfil this requirement? Choose 2 answers from the options given below.

  • A. Use the Elastic Beanstalk service to provision an IIS platform web environment to host the application.
  • B. Use the CodePipeline service to provision an IIS environment to host the application.
  • C. Create a source bundle for the .NET code and upload it as an application revision.
  • D. Use a Chef recipe to deploy the code and attach it to the Elastic Beanstalk environment.

Answer: AC

Explanation:
When you provision an environment using the Elastic Beanstalk service, you can choose the IIS platform to host the .NET based application.
You can also upload the application as a zip file and specify it as an application revision.
For more information on Elastic Beanstalk and .NET environments, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.html

NEW QUESTION 10
You are using Elastic Beanstalk to deploy an application that consists of a web and application server. There is a requirement to run some Python scripts before the application version is deployed to the web server. Which of the following can be used to achieve this?

  • A. Make use of container commands
  • B. Make use of Docker containers
  • C. Make use of custom resources
  • D. Make use of multiple Elastic Beanstalk environments

Answer: A

Explanation:
The AWS Documentation mentions the following:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
For more information on container commands, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
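For illustration, a `.ebextensions` config file using `container_commands` (normally written in YAML) has roughly this structure, shown here as the equivalent Python dict; the command name and script path are assumptions:

```python
import json

# Structure of an .ebextensions config entry that runs a Python script
# after the source bundle is extracted but before the new application
# version is deployed. leader_only restricts it to a single instance.
ebextension = {
    "container_commands": {
        "01_prepare": {
            "command": "python scripts/prepare.py",  # hypothetical script
            "leader_only": True,
        }
    }
}

print(json.dumps(ebextension, indent=2))
```

In a real environment this would be saved as YAML under `.ebextensions/` in the source bundle.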

NEW QUESTION 11
You have defined a Linux-based instance stack in OpsWorks. You now want to attach a database to the OpsWorks stack. Which of the below is an important step to ensure that the application on the Linux instances can communicate with the database?

  • A. Add another stack with the database layer and attach it to the application stack.
  • B. Configure SSL so that the instance can communicate with the database.
  • C. Add the appropriate driver packages to ensure the application can work with the database.
  • D. Configure database tags for the OpsWorks application layer.

Answer: C

Explanation:
The AWS documentation mentions the below important point:
For Linux stacks, if you want to associate an Amazon RDS service layer with your app, you must add the appropriate driver package to the associated app server layer,
as follows:
1. Click Layers in the navigation pane and open the app server's Recipes tab.
2. Click Edit and add the appropriate driver package to OS Packages. For example, you should specify mysql if the layer contains Amazon Linux instances and mysql-client if the layer contains Ubuntu instances.
3. Save the changes and redeploy the app.
For more information on OpsWorks app connectivity, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-connectdb.html

NEW QUESTION 12
Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. Choose 2 answers from the options below

  • A. Create an IAM Role that allows write access to the DynamoDB table.
  • B. Add an IAM Role to a running EC2 instance.
  • C. Create an IAM User that allows write access to the DynamoDB table.
  • D. Add an IAM User to a running EC2 instance.

Answer: AB

Explanation:
The AWS documentation mentions the following:
We designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
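To make options A and B concrete, here are sketches of the two policy documents involved: a trust policy that lets EC2 assume the role, and a permissions policy granting write access to one DynamoDB table. The table ARN and action list are placeholder assumptions:

```python
# Trust policy: which principal may assume the role (option B attaches
# the role to the instance, which relies on this trust relationship).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: what the role may do (option A's write access).
write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    }],
}
```

With the role attached, the application picks up temporary credentials automatically, so no keys are stored on the instance.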

NEW QUESTION 13
Your company is supporting a number of applications that need to be moved to AWS. Initially they thought of moving these applications to the Elastic Beanstalk service. When going to the Elastic Beanstalk service, you can see that the underlying platform is not an option in the Elastic Beanstalk service. Which of the following options can be used to port your application onto Elastic Beanstalk?

  • A. Use the OpsWorks service to create a stack. In the stack, create a separate custom layer. Deploy the application to this layer and then attach the layer to Elastic Beanstalk.
  • B. Use custom Chef recipes to deploy your application in Elastic Beanstalk.
  • C. Use custom CloudFormation templates to deploy the application into Elastic Beanstalk.
  • D. Create a Docker container for the custom application and then deploy it to Elastic Beanstalk.

Answer: D

Explanation:
The AWS documentation mentions the following:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Elastic Beanstalk and Docker, please refer to the below link:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

NEW QUESTION 14
You are using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. You need the stack creation of the ELB and Auto Scaling to wait until the EC2 instance is launched and configured properly. How do you do this?

  • A. It is not possible for the stack creation to wait until one service is created and launched
  • B. Use the WaitCondition resource to hold the creation of the other dependent resources
  • C. Use a CreationPolicy to wait for the creation of the other dependent resources
  • D. Use the HoldCondition resource to hold the creation of the other dependent resources

Answer: C

Explanation:
When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as installing software packages or bootstrapping applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds.
A CreationPolicy instructs CloudFormation to wait on an instance until CloudFormation receives the specified number of signals.
Option A is invalid because this is possible.
Option B is invalid because a WaitCondition is used to make AWS CloudFormation pause the creation of a stack and wait for a signal before it continues to create the stack.
For more information on this, please visit the below URL:
• https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/
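A minimal sketch of the resource fragment this describes, with the Auto Scaling group held back via DependsOn until the instance's CreationPolicy signal arrives (resource names, AMI, timeout, and sizes are illustrative assumptions):

```python
# CloudFormation Resources fragment (as a Python dict): the instance has a
# CreationPolicy requiring one cfn-signal success within 15 minutes, and
# the Auto Scaling group depends on the instance, so it is not created
# until the instance is launched and configured.
resources = {
    "AppInstance": {
        "Type": "AWS::EC2::Instance",
        "CreationPolicy": {
            "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
        },
        "Properties": {"ImageId": "ami-12345678", "InstanceType": "t3.micro"},
    },
    "AppGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "DependsOn": "AppInstance",  # held back until the signal arrives
        "Properties": {"MinSize": "1", "MaxSize": "3", "AvailabilityZones": ["us-east-1a"]},
    },
}
```

The instance's user data would run the configuration steps and then call `cfn-signal` to report success.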

NEW QUESTION 15
What would you set in your CloudFormation template to fire up different instance sizes based off of environment type? i.e. (If this is for prod, use m1.large instead of t1.micro)

  • A. Outputs
  • B. Resources
  • C. Mappings
  • D. Conditions

Answer: D

Explanation:
The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value. Based on the result of that condition, you can conditionally create resources. If you have multiple conditions, separate them with commas.
For more information on CloudFormation conditions, please visit the below link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
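As an illustration, here is a template fragment where an environment-type parameter drives the instance size through a condition and Fn::If (the parameter, condition, and resource names are assumptions):

```python
# CloudFormation fragment (as a Python dict): an EnvType parameter feeds
# an IsProd condition, and Fn::If picks m1.large for prod, t1.micro otherwise.
template = {
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["prod", "test"], "Default": "test"}
    },
    "Conditions": {
        "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Fn::If": ["IsProd", "m1.large", "t1.micro"]},
                "ImageId": "ami-12345678",
            },
        }
    },
}
```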

NEW QUESTION 16
In reviewing the Auto-Scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for costs while preserving elasticity?
Choose 2 options from the choices given below

  • A. Modify the Auto Scaling policy to use scheduled scaling actions.
  • B. Modify the Auto Scaling Group cooldown timers.
  • C. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale down policy.
  • D. Modify the Auto Scaling group termination policy to terminate the newest instance first.

Answer: BC

Explanation:
The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn't launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, Auto Scaling waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. Note that if an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.
For more information on Auto Scaling cooldown timers, please visit the URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html
You can also modify the CloudWatch triggers to ensure the thresholds are appropriate for the scale down policy. For more information, please visit the Auto Scaling user guide URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
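A sketch of the two adjustments as API parameter dicts: one for UpdateAutoScalingGroup to lengthen the default cooldown, and one for PutMetricAlarm to widen the scale-down alarm's evaluation window. All names, periods, and thresholds are illustrative assumptions:

```python
# Option B: lengthen the group's default cooldown so scale-in/out
# activities cannot fire back-to-back within the same hour.
cooldown_update = {
    "AutoScalingGroupName": "web-asg",
    "DefaultCooldown": 600,  # seconds to wait after a scaling activity
}

# Option C: make the scale-down alarm require a longer sustained signal
# before it triggers, dampening flapping.
scale_down_alarm = {
    "AlarmName": "web-asg-scale-down",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,           # evaluate over 5-minute windows
    "EvaluationPeriods": 4,  # require 20 sustained minutes before scaling in
    "Threshold": 25.0,
    "ComparisonOperator": "LessThanThreshold",
}
```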

NEW QUESTION 17
A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis.
Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted.
Choose 2 answers.

  • A. Define an Amazon RDS Read-Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute.
  • B. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted.
  • C. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack.
  • D. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted.
  • E. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted.

Answer: CD

Explanation:
With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.
To keep a resource when its stack is deleted, specify Retain for that resource. You can use Retain for any resource. For example, you can retain a nested stack, S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.
For more information on the deletion policy, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
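For illustration, the RDS resource in the load-testing template could carry the attribute like this; the property values are placeholder assumptions:

```python
# CloudFormation resource (as a Python dict) for the results database.
# DeletionPolicy "Snapshot" takes a final snapshot on stack deletion;
# "Retain" would instead keep the live instance running.
rds_resource = {
    "ResultsDB": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Snapshot",
        "Properties": {
            "Engine": "postgres",
            "DBInstanceClass": "db.t3.medium",
            "AllocatedStorage": "20",
            "MasterUsername": "loadtest",
            "MasterUserPassword": "change-me",  # placeholder; use secrets in practice
        },
    }
}
```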

NEW QUESTION 18
You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements?

  • A. Store your log in Amazon CloudWatch Logs.
  • B. Store your logs in Amazon Glacier.
  • C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
  • D. Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.

Answer: C

Explanation:
Option A is invalid because CloudWatch will not store the logs indefinitely, and secondly it won't be the cost-effective option.
Option B is invalid because it won't serve the purpose of regularly retrieving the most recent logs for troubleshooting. You will need to pay more to retrieve the logs faster from this storage.
Option D is invalid because it is not an ideal or cost-effective option.
You can define lifecycle configuration rules for objects that have a well-defined lifecycle. For example: if you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them.
Some documents are frequently accessed for a limited period of time. After that, these documents are less frequently accessed. Over time, you might not need real-time access to these objects, but your organization or regulations might require you to archive them for a longer period and then optionally delete them later.
You might also upload some types of data to Amazon S3 primarily for archival purposes, for example digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.
For more information on lifecycle management please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Note:
Option C is the cheapest option. Note, however, that CloudWatch can in fact store logs indefinitely, or for a retention period between 10 years and one day:
"Log Retention—By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention period between 10 years and one day." https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
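A sketch of the lifecycle configuration option C describes, archiving to Glacier after 30 days and expiring after roughly 10 years, in the shape accepted by S3's PutBucketLifecycleConfiguration API. The prefix and day counts are assumptions:

```python
# S3 lifecycle rule: recent logs stay in S3 Standard for troubleshooting,
# older logs transition to Glacier, and everything expires after ~10 years.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-then-expire-logs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 3650},  # roughly 10 years
    }]
}
```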

NEW QUESTION 19
You have a requirement to automate the creation of EBS Snapshots. Which of the following can be
used to achieve this in the best way possible?

  • A. Create a PowerShell script which uses the AWS CLI to get the volumes and then run the script as a cron job.
  • B. Use the AWS Config service to create a snapshot of the AWS Volumes.
  • C. Use the AWS CodeDeploy service to create a snapshot of the AWS Volumes.
  • D. Use CloudWatch Events to trigger the snapshots of EBS Volumes.

Answer: D

Explanation:
The best option is to use the inbuilt service from CloudWatch, namely CloudWatch Events, to automate the creation of EBS Snapshots. With Option A, you would be restricted to running the PowerShell script on Windows machines and maintaining the script itself, and then you have the overhead of having a separate instance just to run that script.
In CloudWatch Events, you can use the EC2 CreateSnapshot API call as the target.
The AWS Documentation mentions:
Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
For more information on CloudWatch Events, please visit the below URL:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
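As a sketch, a scheduled CloudWatch Events rule and the CreateSnapshot request its target would issue might look like the following. The schedule expression, volume ID, and names are assumptions:

```python
# Parameters for a scheduled CloudWatch Events rule (PutRule), which
# fires on a cron schedule rather than in response to a resource change.
rule_params = {
    "Name": "nightly-ebs-snapshot",
    "ScheduleExpression": "cron(0 2 * * ? *)",  # 02:00 UTC every day
    "State": "ENABLED",
}

# The EC2 CreateSnapshot request the rule's target would issue against
# the volume each time the rule fires.
snapshot_request = {
    "VolumeId": "vol-0abc12345678",
    "Description": "Automated nightly snapshot via CloudWatch Events",
}
```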

NEW QUESTION 20
You currently have a set of instances running on your OpsWorks stacks. You need to install security updates on these servers. What does AWS recommend in terms of how the security updates should be deployed?
Choose 2 answers from the options given below.

  • A. Create and start new instances to replace your current online instances. Then delete the current instances.
  • B. Create a new OpsWorks stack with the new instances.
  • C. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command.
  • D. Create a CloudFormation template which can be used to replace the instances.

Answer: AC

Explanation:
The AWS Documentation mentions the following
By default, AWS OpsWorks Stacks automatically installs the latest updates during setup, after an instance finishes booting. AWS OpsWorks Stacks does not automatically install updates after an instance is online, to avoid interruptions such as restarting application servers. Instead, you manage updates to your online instances yourself, so you can minimize any disruptions.
We recommend that you use one of the following to update your online instances.
Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.
On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances.
For more information on OpsWorks updates, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/best-practices-updates.html
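To make the Update Dependencies route concrete, these are the parameters you might pass to the OpsWorks CreateDeployment API (for example via boto3's `opsworks.create_deployment`); the stack and instance IDs are placeholder assumptions:

```python
# Parameters for an OpsWorks CreateDeployment call that runs the
# update_dependencies stack command on specific online instances,
# installing the current set of security patches and other updates.
deployment_params = {
    "StackId": "2f51e752-aaaa-bbbb-cccc-111122223333",  # placeholder stack ID
    "InstanceIds": ["i-0abc1234", "i-0def5678"],
    "Command": {"Name": "update_dependencies"},
}
```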

NEW QUESTION 21
You have a set of EC2 Instances in an Auto Scaling Group that processes messages from an SQS queue. The messages contain the location in S3 from where videos need to be processed by the EC2 Instances. When a scale-in happens, it is noticed that at times an EC2 Instance is still processing a video when the instance is terminated. How can you implement a solution which will ensure this does not happen?

  • A. Change the CoolDown property for the Auto Scaling Group.
  • B. Suspend the AZRebalance termination policy.
  • C. Use lifecycle hooks to ensure the processing is complete before the termination occurs.
  • D. Increase the minimum and maximum size for the Auto Scaling group, and change the scaling policies so they scale less dynamically.

Answer: C

Explanation:
This is a case where lifecycle hooks can be used. The lifecycle hook can be used to put the instance in a state of Terminating:Wait, complete the processing, and then send a signal to complete the termination.
Auto Scaling lifecycle hooks enable you to perform custom actions by pausing instances as Auto Scaling launches or terminates them. For example, while your newly launched instance is paused, you could install or configure software on it.
For more information on Auto Scaling lifecycle hooks, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
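A sketch of the lifecycle-hook setup: parameters for PutLifecycleHook that pause terminating instances, and the CompleteLifecycleAction parameters a worker would send once its video finishes. The hook and group names and the timeout are assumptions:

```python
# PutLifecycleHook parameters: instances selected for scale-in enter
# Terminating:Wait instead of terminating immediately.
hook_params = {
    "LifecycleHookName": "drain-video-jobs",
    "AutoScalingGroupName": "video-workers",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    "HeartbeatTimeout": 900,       # seconds to wait for the completion signal
    "DefaultResult": "CONTINUE",   # proceed with termination if no signal arrives
}

# CompleteLifecycleAction parameters the worker sends after finishing
# its in-flight video, releasing the instance for termination.
def completion_params(instance_id):
    return {
        "LifecycleHookName": "drain-video-jobs",
        "AutoScalingGroupName": "video-workers",
        "LifecycleActionResult": "CONTINUE",
        "InstanceId": instance_id,
    }
```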

NEW QUESTION 22
The AWS CodeDeploy service can be used to deploy code from which of the below mentioned source repositories? Choose 3 answers from the options given below.

  • A. S3 Buckets
  • B. GitHub repositories
  • C. Subversion repositories
  • D. Bitbucket repositories

Answer: ABD

Explanation:
The AWS documentation mentions the following
You can deploy a nearly unlimited variety of application content, such as code, web and configuration files, executables, packages, scripts, multimedia files, and so on. AWS CodeDeploy can deploy application content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. You do not need to make changes to your existing code before you can use AWS CodeDeploy.
For more information on AWS Code Deploy, please refer to the below link:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html
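For illustration, the revision structures CodeDeploy's CreateDeployment API accepts for S3 and GitHub sources look roughly like this; the bucket, key, repository, and commit values are placeholder assumptions:

```python
# Revision pointing at an application bundle stored in an S3 bucket.
s3_revision = {
    "revisionType": "S3",
    "s3Location": {
        "bucket": "my-deploy-bucket",
        "key": "app-v2.zip",
        "bundleType": "zip",
    },
}

# Revision pointing at a specific commit in a GitHub repository.
github_revision = {
    "revisionType": "GitHub",
    "gitHubLocation": {
        "repository": "my-org/my-app",
        "commitId": "a1b2c3d4e5f6",
    },
}
```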
