DOP-C01 Exam Questions - Online Test


Exambible offers a free demo for the DOP-C01 exam. "AWS Certified DevOps Engineer - Professional", also known as the DOP-C01 exam, is an Amazon-Web-Services certification. This set of posts, Passing the Amazon-Web-Services DOP-C01 exam, will help you answer those questions. The DOP-C01 Questions & Answers covers all the knowledge points of the real exam. 100% real Amazon-Web-Services DOP-C01 exam questions, revised by experts!

Free demo questions for Amazon-Web-Services DOP-C01 Exam Dumps Below:

NEW QUESTION 1
You recently encountered a major bug in your web application during a deployment cycle. During this failed deployment, it took the team four hours to roll back to a previously working state, which left customers with a poor user experience. During the post-mortem, your team discussed the need to provide a quicker, more robust way to roll back failed deployments. You currently run your web application on Amazon EC2 and use Elastic Load Balancing for your load balancing needs.
Which technique should you use to solve this problem?

  • A. Create deployable versioned bundles of your application. Store the bundle on Amazon S3. Re-deploy your web application on Elastic Beanstalk and enable the Elastic Beanstalk auto-rollback feature tied to CloudWatch metrics that define failure.
  • B. Use an AWS OpsWorks stack to re-deploy your web application and use AWS OpsWorks DeploymentCommand to initiate a rollback during failures.
  • C. Create deployable versioned bundles of your application. Store the bundle on Amazon S3. Use an AWS OpsWorks stack to redeploy your web application and use AWS OpsWorks application versioning to initiate a rollback during failures.
  • D. Use Elastic Beanstalk to redeploy your web application and use the Elastic Beanstalk API to trigger a FailedDeployment API call to initiate a rollback to the previous version.

Answer: B

Explanation:
The AWS Documentation mentions the following:
AWS OpsWorks DeploymentCommand has a rollback option. The following commands are available for apps:
deploy: Deploy App.
Ruby on Rails apps have an optional args parameter named migrate. Set Args to {"migrate": ["true"]} to migrate the database.
The default setting is {"migrate": ["false"]}.
rollback: Rolls the app back to the previous version.
When you update an app, AWS OpsWorks stores the previous versions, up to a maximum of five.
You can use this command to roll an app back as many as four versions. Reference link:
• http://docs.aws.amazon.com/opsworks/latest/APIReference/API_DeploymentCommand.html
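As an illustration of that rollback deployment command, here is a minimal Python/boto3 sketch; the stack and app IDs are hypothetical placeholders.

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Roll the app back to the previously deployed version.
    # StackId and AppId are hypothetical placeholders.
    response = opsworks.create_deployment(
        StackId="11111111-2222-3333-4444-555555555555",
        AppId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        Command={"Name": "rollback"},
        Comment="Roll back failed deployment",
    )
    print(response["DeploymentId"])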

NEW QUESTION 2
You are using Elastic Beanstalk to manage your e-commerce store. The store is based on an open source e-commerce platform and is deployed across multiple instances in an Auto Scaling group. Your development team often creates new "extensions" for the e-commerce store. These extensions include PHP source code as well as an SQL upgrade script used to make any necessary updates to the database schema. You have noticed that some extension deployments fail due to an error when running the SQL upgrade script. After further investigation, you realize that this is because the SQL script is being executed on all of your Amazon EC2 instances. How would you ensure that the SQL script is only executed once per deployment regardless of how many Amazon EC2 instances are running at the time?

  • A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true.
  • B. Make use of the Amazon EC2 metadata service to query whether the instance is marked as the "leader" in the Auto Scaling group. Only execute the script if "true" is returned.
  • C. Use a "Solo Command" within an Elastic Beanstalk configuration file to execute the script. The Elastic Beanstalk service will ensure that the command is only executed once.
  • D. Update the Amazon RDS security group to only allow write access from a single instance in the Auto Scaling group; that way, only one instance will successfully execute the script on the database.

Answer: A

Explanation:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI Id or instance type. For more information on customizing containers, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
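To make the leader_only behaviour concrete, the sketch below (Python) writes a hypothetical .ebextensions configuration file whose container command runs the SQL upgrade script only on the leader instance; the file name and script path are illustrative assumptions, not part of the question.

    import os
    import textwrap

    # Hypothetical .ebextensions config: the container command runs the SQL
    # upgrade script once per deployment because leader_only is set to true.
    config = textwrap.dedent("""\
        container_commands:
          01_run_sql_upgrade:
            command: "bash scripts/upgrade_schema.sh"  # illustrative script name
            leader_only: true
    """)

    os.makedirs(".ebextensions", exist_ok=True)
    with open(".ebextensions/01_sql_upgrade.config", "w") as handle:
        handle.write(config)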

NEW QUESTION 3
Your company is planning to set up a WordPress application. The WordPress application will connect to a MySQL database. Part of the requirement is to ensure that the database environment is fault tolerant and highly available. Which of the following 2 options individually can help fulfil this requirement?

  • A. Create a MySQL RDS environment with Multi-AZ feature enabled
  • B. Create a MySQL RDS environment and create a Read Replica
  • C. Create multiple EC2 instances in the same AZ. Host MySQL and enable replication via scripts between the instances.
  • D. Create multiple EC2 instances in separate AZs. Host MySQL and enable replication via scripts between the instances.

Answer: AD

Explanation:
One way to ensure high availability and fault tolerant environments is to ensure instances are located across multiple Availability Zones. Hence if you are hosting MySQL yourself, ensure you have instances spread across multiple AZs.
The AWS Documentation mentions the following about the Multi-AZ feature:
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology.
For more information on AWS Multi-AZ deployments, please visit the below URL: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
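For illustration, a Multi-AZ MySQL instance can be provisioned with a single API call; the Python/boto3 sketch below uses hypothetical identifiers, credentials, and sizes.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # MultiAZ=True provisions a synchronous standby in a second Availability Zone.
    rds.create_db_instance(
        DBInstanceIdentifier="wordpress-db",       # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",     # placeholder only
        MultiAZ=True,
    )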

NEW QUESTION 4
What is web identity federation?

  • A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
  • B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
  • C. Use of AWS IAM user tokens to log in as a Google or Facebook user.
  • D. Use the STS service to create a user on AWS which will allow them to log in from a Facebook or Google app.

Answer: B

Explanation:
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) — such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. For more information on web identity federation please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
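A minimal sketch of the token exchange with Python and boto3 follows; the role ARN and the identity-provider token are hypothetical placeholders.

    import boto3

    sts = boto3.client("sts")

    # Exchange an OIDC token issued by the identity provider (e.g. Google)
    # for temporary AWS credentials mapped to an IAM role.
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/MobileAppRole",   # hypothetical
        RoleSessionName="app-user-session",
        WebIdentityToken="<token returned by the identity provider>",
    )
    creds = resp["Credentials"]
    print(creds["AccessKeyId"], creds["Expiration"])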

NEW QUESTION 5
The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below.

  • A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs agent.
  • B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.
  • C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools.
  • D. Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

Answer: AC

Explanation:
Option B is invalid because CloudTrail is not designed specifically to take in UDP packets.
Option D is invalid because CloudWatch Logs are already available, so there is no need to have specific logs designed for this.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. For more information on CloudWatch Logs please refer to the below link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
You can then use Kinesis to process those logs.
For more information on Amazon Kinesis please refer to the below link: http://docs.aws.amazon.com/streams/latest/dev/introduction.html
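As a sketch of the first half of option A, the shared log group can be created up front (here with boto3 rather than CloudFormation); the group name and retention period are illustrative assumptions, and the CloudWatch Logs agent on each instance would then ship both OS and application logs into it.

    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    # Central log group that both operating system and application logs are
    # shipped into by the CloudWatch Logs agent running on the instances.
    logs.create_log_group(logGroupName="/myapp/combined")    # hypothetical name
    logs.put_retention_policy(logGroupName="/myapp/combined", retentionInDays=30)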

NEW QUESTION 6
Which of the following services allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances?

  • A. Elastic Beanstalk
  • B. Elastic Container Service
  • C. OpsWorks
  • D. CloudWatch

Answer: B

Explanation:
The AWS documentation provides the following information:
Amazon EC2 Container Service (ECS) allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Applications packaged as containers locally will deploy and run in the same way as containers managed by Amazon ECS. Amazon ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, and allows you to schedule Docker-enabled applications across your cluster based on your resource needs and availability requirements.
For more information on ECS, please visit the link:
• https://aws.amazon.com/ecs/details/

NEW QUESTION 7
A user is accessing RDS from an application. The user has enabled the Multi AZ feature with the MS SQL RDS DB. During a planned outage how will AWS ensure that a switch from DB to a standby replica will not affect access to the application?

  • A. RDS will have an internal IP which will redirect all requests to the new DB
  • B. RDS uses DNS to switch over to the standby replica for a seamless transition
  • C. The switch over changes Hardware so RDS does not need to worry about access
  • D. RDS will have both the DBs running independently and the user has to manually switch over

Answer: B

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.
And as per the AWS documentation, the CNAME is changed to the standby DB when the primary one fails.
Q: What happens during Multi-AZ failover and how long does it take?
"Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer."
https://aws.amazon.com/rds/faqs/
Based on this, RDS Multi-AZ will use DNS to update the CNAME and hence B is the right option. For more information on RDS Multi-AZ please visit the link:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
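Because the switch is done by flipping the CNAME of the instance endpoint, the application keeps using the same DNS name throughout. The Python/boto3 sketch below reads that endpoint and, purely as an illustration, forces a failover on a hypothetical Multi-AZ instance to exercise the behaviour.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The application always connects to this DNS name; RDS repoints it
    # to the standby during a failover.
    db = rds.describe_db_instances(DBInstanceIdentifier="orders-db")   # hypothetical
    print(db["DBInstances"][0]["Endpoint"]["Address"])

    # Optional: simulate a planned failover to verify client retry logic.
    rds.reboot_db_instance(DBInstanceIdentifier="orders-db", ForceFailover=True)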

NEW QUESTION 8
Which of the following services can be used to provision an ECS cluster containing the following components in an automated way?
1) An Application Load Balancer for distributing traffic among the various task instances running on EC2 instances
2) A single task instance on each EC2 instance, running as part of an Auto Scaling group
3) The ability to support various types of deployment strategies

  • A. SAM
  • B. OpsWorks
  • C. Elastic Beanstalk
  • D. CodeCommit

Answer: C

Explanation:
You can create Docker environments that support multiple containers per Amazon EC2 instance with the multi-container Docker platform for Elastic Beanstalk. Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to multi-container Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk takes care of Amazon ECS tasks including cluster creation, task definition, and execution. Please refer to the below AWS documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html

NEW QUESTION 9
You need to perform ad-hoc business analytics queries on well-structured data. Data comes in
constantly at a high velocity. Your business intelligence team can understand SQL.
What AWS service(s) should you look to first?

  • A. Kinesis Firehose + RDS
  • B. Kinesis Firehose + Redshift
  • C. EMR using Hive
  • D. EMR running Apache Spark

Answer: B

Explanation:
Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
For more information on Kinesis firehose, please visit the below URL:
• https://aws.amazon.com/kinesis/firehose/
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. For more information on Redshift, please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
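A producer only needs a single API call to push records into the Firehose delivery stream that loads Redshift; a minimal Python/boto3 sketch follows, with a hypothetical stream name and record.

    import json
    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    event = {"order_id": 42, "amount": 19.99}    # example well-structured record

    # Firehose batches, optionally transforms, and loads the data into Redshift.
    firehose.put_record(
        DeliveryStreamName="orders-to-redshift",   # hypothetical stream
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )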

NEW QUESTION 10
A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier?

  • A. Elastic Load Balancing, Amazon EC2, and Auto Scaling
  • B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3
  • C. Amazon RDS with Multi-AZ and Auto Scaling
  • D. Amazon EC2, Amazon DynamoDB, and Amazon S3

Answer: A

Explanation:
The question asks about a scalable web tier and not a database tier. So Options B, C, and D can be eliminated, since a database option is not needed. The below example shows an Elastic Load Balancer connected to 2 EC2 instances via Auto Scaling. This is an example of an elastic and scalable web tier. By scalable we mean that the Auto Scaling process will increase or decrease the number of EC2 instances as required.
DOP-C01 dumps exhibit
For more information on best practices for AWS Cloud applications, please visit the below URL: https://d0.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
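An elastic web tier of this shape can be wired up in one call once a launch configuration and a load balancer target group exist; the names in the Python/boto3 sketch below are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Web tier: EC2 instances managed by Auto Scaling and registered with
    # an Elastic Load Balancing target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier-asg",                    # hypothetical
        LaunchConfigurationName="web-tier-lc",                  # hypothetical
        MinSize=2,
        MaxSize=10,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",    # hypothetical subnets
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
        ],
    )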

NEW QUESTION 11
You are a DevOps Engineer in your company. You have been instructed to ensure there is an automated backup solution in place for EBS Volumes. These snapshots need to be retained only for a period of 20 days. How can you achieve this requirement in an efficient manner?

  • A. Use the aws ec2 create-volume API to create a snapshot of the EBS Volume. Then use the describe-volume call to see those snapshots which are greater than 20 days old and then delete them accordingly using the delete-volume API call.
  • B. Use Lifecycle policies to push the EBS Volumes to Amazon Glacier. Then use further lifecycle policies to delete the snapshots after 20 days.
  • C. Use Lifecycle policies to push the EBS Volumes to Amazon S3. Then use further lifecycle policies to delete the snapshots after 20 days.
  • D. Use Amazon Data Lifecycle Manager to automate the process.

Answer: D

Explanation:
Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes.
Automating snapshot management helps you to:
• Protect valuable data by enforcing a regular backup schedule. Retain backups as required by auditors or internal compliance.
• Reduce storage costs by deleting outdated backups.
For more information, please check the below AWS Docs:
• https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
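A lifecycle policy along these lines could look like the Python/boto3 sketch below; the role ARN and tag are hypothetical, and retention is expressed as a count of daily snapshots to approximate the 20-day requirement.

    import boto3

    dlm = boto3.client("dlm", region_name="us-east-1")

    # Snapshot all volumes tagged Backup=true once a day and keep the last 20.
    dlm.create_lifecycle_policy(
        ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
        Description="Daily EBS snapshots, retained for roughly 20 days",
        State="ENABLED",
        PolicyDetails={
            "ResourceTypes": ["VOLUME"],
            "TargetTags": [{"Key": "Backup", "Value": "true"}],   # hypothetical tag
            "Schedules": [{
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 20},
            }],
        },
    )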

NEW QUESTION 12
You're building a mobile application game. The application needs permissions for each user to communicate and store data in DynamoDB tables. What is the best method for granting each mobile device that installs your application access to DynamoDB tables for storage when required? Choose the correct answer from the options below.

  • A. During the install and game configuration process, have each user create an IAM credential and assign the IAM user to a group with proper permissions to communicate with DynamoDB.
  • B. Create an IAM group that only gives access to your application and to the DynamoDB tables. Then, when writing to DynamoDB, simply include the unique device ID to associate the data with that specific user.
  • C. Create an IAM role with the proper permission policy to communicate with the DynamoDB table. Use web identity federation, which assumes the IAM role using AssumeRoleWithWebIdentity, when the user signs in, granting temporary security credentials using STS.
  • D. Create an Active Directory server and an AD user for each mobile application user. When the user signs in to the AD sign-on, allow the AD server to federate using SAML 2.0 to IAM and assign a role to the AD user which is then assumed with AssumeRoleWithSAML.

Answer: C

Explanation:
For access to any AWS service, the ideal approach for any application is to use roles. This is the first preference.
For more information on IAM policies please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Next, for any mobile or web application, you need to use web identity federation. Hence option C is the right option. This, along with the usage of roles, is highly stressed in the AWS documentation.
The AWS documentation mentions the following:
When developing a web application it is recommended not to embed or distribute long-term AWS credentials with apps that a user downloads to a device, even in an encrypted store. Instead, build your app so that it requests temporary AWS security credentials dynamically when needed using web identity federation. The supplied temporary credentials map to an AWS role that has only the permissions needed to perform the tasks required by the mobile app.
For more information on web identity federation please refer to the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
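Continuing the flow in option C, the temporary credentials returned by AssumeRoleWithWebIdentity can be used directly against DynamoDB; the Python/boto3 sketch below uses hypothetical role, table, and token values.

    import boto3

    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GameUserRole",   # hypothetical
        RoleSessionName="device-1234",
        WebIdentityToken="<token from the identity provider>",
    )
    creds = resp["Credentials"]

    # Scoped, short-lived session: no long-term keys ever ship with the app.
    session = boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    table = session.resource("dynamodb").Table("GameState")      # hypothetical table
    table.put_item(Item={"UserId": "device-1234", "Score": 100})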

NEW QUESTION 13
When one creates an encrypted EBS volume and attaches it to a supported instance type, which of the following data types are encrypted?
Choose 3 answers from the options below

  • A. Data at rest inside the volume
  • B. All data copied from the EBS volume to S3
  • C. All data moving between the volume and the instance
  • D. All snapshots created from the volume

Answer: ACD

Explanation:
This is clearly given in the AWS documentation. Amazon EBS Encryption
Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
• Data at rest inside the volume
• All data moving between the volume and the instance
• All snapshots created from the volume
• All volumes created from those snapshots
For more information on EBS encryption, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
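For reference, creating an encrypted volume is a single flag on the CreateVolume call; a minimal Python/boto3 sketch follows, with the Availability Zone and size chosen arbitrarily.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Encrypted=True covers data at rest, data in transit to the instance,
    # and every snapshot (and volume) derived from this volume.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,               # GiB
        VolumeType="gp3",
        Encrypted=True,
    )
    print(volume["VolumeId"])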

NEW QUESTION 14
There is a requirement for an application hosted on a VPC to access an on-premise LDAP server. The VPC and the on-premise location are connected via an IPsec VPN. Which of the below are the right options for the application to authenticate each user? Choose 2 answers from the options below.

  • A. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials.
  • B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources.
  • C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service.
  • D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate AWS service.

Answer: BC

Explanation:
When you have the need for an on-premise environment to work with a cloud environment, you would normally have 2 artefacts for authentication purposes:
• An identity store - This is the on-premise store, such as Active Directory, which holds all the information for the users and the groups they belong to.
• An identity broker - This is used as an intermediate agent between the on-premise location and the cloud environment. In Windows you have a system known as Active Directory Federation Services to provide this facility.
Hence in the above case, you need to have an identity broker which can work with the identity store and the Security Token Service in AWS. An example diagram of how this works from the AWS documentation is given below.
DOP-C01 dumps exhibit
For more information on federated access, please visit the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html
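One way to picture option C is a small identity broker that first validates the user against LDAP and then asks STS for federated credentials; the Python sketch below stubs out the LDAP check and uses a hypothetical inline policy.

    import json
    import boto3

    def authenticate_against_ldap(username: str, password: str) -> bool:
        # Stub: a real broker would bind to the on-premise LDAP server
        # over the IPsec VPN and verify the user's credentials.
        return True

    def get_federated_credentials(username: str, password: str) -> dict:
        if not authenticate_against_ldap(username, password):
            raise PermissionError("LDAP authentication failed")
        sts = boto3.client("sts")
        # Hypothetical inline policy limiting what the federated user may do.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                           "Resource": "arn:aws:s3:::app-bucket/*"}],
        }
        resp = sts.get_federation_token(
            Name=username[:32],
            Policy=json.dumps(policy),
            DurationSeconds=3600,
        )
        return resp["Credentials"]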

NEW QUESTION 15
Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this?

  • A. Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.
  • B. Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic.
  • C. Use AWS IAM credential reports to deliver a CSV of all uses of IAM UserTokens over time to the CTO.
  • D. Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.

Answer: A

Explanation:
AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account.
For more information on Cloudtrail, please visit the below URL:
• http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
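As a sketch of option A's first step, a multi-region (global) trail can be created and started with two calls; the trail and bucket names below are hypothetical, and the bucket needs the usual CloudTrail bucket policy in place.

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # One trail covering all regions, delivering API activity logs to S3
    # for the weekly aggregation script to process.
    cloudtrail.create_trail(
        Name="account-activity-trail",              # hypothetical
        S3BucketName="my-cloudtrail-logs-bucket",   # hypothetical, pre-configured bucket
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="account-activity-trail")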

NEW QUESTION 16
Which of the following are the basic stages of a CI/CD pipeline? Choose 3 answers from the options below.

  • A. Source Control
  • B. Build
  • C. Run
  • D. Production

Answer: ABD

Explanation:
The below diagram shows the stages of a typical CI/CD pipeline
DOP-C01 dumps exhibit
For more information on AWS Continuous Integration, please visit the below URL: https://d1.awsstatic.com/whitepapers/DevOps/practicing-continuous-integration-continuous-delivery-on-AWS.pdf

NEW QUESTION 17
You have an OpsWorks stack set up in AWS. You want to install some updates to the Linux instances in the stack. Which of the following can be used to publish those updates? Choose 2 answers from the options given below.

  • A. Create and start new instances to replace your current online instances. Then delete the current instances.
  • B. Use Auto Scaling to launch new instances and then delete the older instances.
  • C. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command.
  • D. Delete the stack and create a new stack with the instances and their relevant updates.

Answer: AC

Explanation:
As per AWS documentation.
By default, AWS OpsWorks Stacks automatically installs the latest updates during setup, after an instance finishes booting. AWS OpsWorks Stacks does not automatically install updates after an instance is online, to avoid interruptions such as restarting application servers. Instead, you manage updates to your online instances yourself, so you can minimize any disruptions.
We recommend that you use one of the following to update your online instances.
• Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.
• On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances.
More information is available at: https://docs.aws.amazon.com/opsworks/latest/userguide/workingsecurity-updates.html
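The Update Dependencies stack command in option C can also be issued through the API; a minimal Python/boto3 sketch follows, with hypothetical stack and instance IDs.

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Install the current set of security patches on the specified online
    # Linux instances (Chef 11.10 or older stacks).
    opsworks.create_deployment(
        StackId="11111111-2222-3333-4444-555555555555",         # hypothetical
        InstanceIds=["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],    # hypothetical
        Command={"Name": "update_dependencies"},
    )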

NEW QUESTION 18
You run a multi-tier architecture on AWS with webserver instances running Nginx. Your users are getting errors when they use the web application. How can you diagnose the errors quickly and efficiently?

  • A. Install the CloudWatch Logs agent and send Nginx access log data to CloudWatch. From there, pipe the log data through to a third-party logging and graphing tool.
  • B. Install the CloudWatch Logs agent and send Nginx access log data to CloudWatch. Then filter the log streams for the relevant errors.
  • C. Send all the errors to AWS Lambda for processing.
  • D. Send all the errors to AWS Config for processing.

Answer: B

Explanation:
The AWS Documentation mentions the following:
You use metric filters to search for and match terms, phrases, or values in your log events. When a metric filter finds one of the terms, phrases, or values in your log events, you can increment the value of a CloudWatch metric. For example, you can create a metric filter to search for and count the occurrence of the word ERROR in your log events.
For more information on CloudWatch Logs analysis, please see the below link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
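Once the Nginx logs are in CloudWatch Logs, the error lines can be pulled out with a filter pattern; a short Python/boto3 sketch follows, with a hypothetical log group name and a simple term match in the style of the documentation example above.

    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    # Search the shipped Nginx log streams for lines containing the term ERROR.
    events = logs.filter_log_events(
        logGroupName="/nginx/access",   # hypothetical log group fed by the agent
        filterPattern="ERROR",
        limit=50,
    )
    for event in events["events"]:
        print(event["message"])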

NEW QUESTION 19
You are a Devops engineer for your company. The company hosts a web application that is hosted on a single EC2 Instance. The end users are complaining of slow response times for the application. Which of the following can be used to effectively scale the application?

  • A. Use Auto Scaling groups to launch multiple instances and place them behind an ELB.
  • B. Use Auto Scaling launch configurations to launch multiple instances and place them behind an ELB.
  • C. Use Amazon RDS with the Multi-AZ feature.
  • D. Use CloudFormation to deploy the app again with Amazon RDS with the Multi-AZ feature.

Answer: A

Explanation:
The AWS Documentation mentions the below:
When you use Auto Scaling, you can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As Auto Scaling adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group. For more information on Auto Scaling and ELB, please refer to the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/autoscaling-load-balancer.html

NEW QUESTION 20
You are a DevOps engineer for a company. You have been requested to create a rolling deployment solution that is cost-effective with minimal downtime. How should you achieve this? Choose two answers from the options below

  • A. Re-deploy your application using a CloudFormation template to deploy Elastic Beanstalk
  • B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template
  • C. Use UpdatePolicy attribute to specify how CloudFormation handles updates to Auto Scaling Group resource.
  • D. After each stack is deployed, tear down the old stack

Answer: BC

Explanation:
The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified.
Option A is invalid because it is not efficient to use CloudFormation just to deploy Elastic Beanstalk.
Option D is invalid because tearing down stacks is an inefficient process when update policies are available.
For more information on Autoscaling Rolling Updates please refer to the below link:
• https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling- updates/
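The relevant template fragment is small; the sketch below builds it as a Python dictionary so the UpdatePolicy keys are easy to see (the resource names and sizes are hypothetical), and it could be serialized with json.dumps into the TemplateBody of an update_stack call.

    import json

    # Hypothetical Auto Scaling group resource with a rolling-update policy.
    asg_resource = {
        "WebServerGroup": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "2",
                "MaxSize": "6",
                "LaunchConfigurationName": {"Ref": "WebServerLaunchConfig"},
                "AvailabilityZones": {"Fn::GetAZs": ""},
            },
            "UpdatePolicy": {
                "AutoScalingRollingUpdate": {
                    "MinInstancesInService": 2,   # keep capacity up during the update
                    "MaxBatchSize": 1,            # replace one instance at a time
                    "PauseTime": "PT5M",
                }
            },
        }
    }
    print(json.dumps(asg_resource, indent=2))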

NEW QUESTION 21
When creating an Elastic Beanstalk environment using the wizard, what are the 3 configuration options presented to you?

  • A. Choosing the type of environment - Web or Worker environment
  • B. Choosing the platform type - Node.js, IIS, etc.
  • C. Choosing the type of Notification - SNS or SQS
  • D. Choosing whether you want a highly available environment or not

Answer: ABD

Explanation:
The below screens are what are presented to you when creating an Elastic Beanstalk environment
DOP-C01 dumps exhibit
The high availability preset includes a load balancer; the low cost preset does not. For more information on the configuration settings, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-create-wizard.html

NEW QUESTION 22
You have a web application hosted on EC2 instances. There are application changes which happen to the web application on a quarterly basis. Which of the following are examples of Blue-Green deployments which can be applied to the application? Choose 2 answers from the options given below.

  • A. Deploy the application to an Elastic Beanstalk environment. Have a secondary Elastic Beanstalk environment in place with the updated application code. Use the swap URLs feature to switch onto the new environment.
  • B. Place the EC2 instances behind an ELB. Have a secondary environment with EC2 instances and an ELB in another region. Use Route53 with geo-location to route requests and switch over to the secondary environment.
  • C. Deploy the application using OpsWorks stacks. Have a secondary stack for the new application deployment. Use Route53 to switch over to the new stack for the new application update.
  • D. Deploy the application to an Elastic Beanstalk environment. Use the Rolling updates feature to perform a Blue-Green deployment.

Answer: AC

Explanation:
The AWS Documentation mentions the following
AWS Elastic Beanstalk is a fast and simple way to get an application up and running on AWS. It's perfect for developers who want to deploy code without worrying about managing the underlying infrastructure. Elastic Beanstalk supports Auto Scaling and Elastic Load Balancing, both of which enable blue/green deployment.
Elastic Beanstalk makes it easy to run multiple versions of your application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.
AWS OpsWorks is a configuration management service based on Chef that allows customers to deploy and manage application stacks on AWS. Customers can specify resource and application configuration, and deploy and monitor running resources. OpsWorks simplifies cloning entire stacks when you're preparing blue/green environments.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
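For the Elastic Beanstalk variant in option A, the cut-over itself is one API call that swaps the environment URLs; a Python/boto3 sketch follows with hypothetical environment names.

    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    # Swap the CNAMEs so traffic moves from the current (blue) environment
    # to the staged (green) environment carrying the new application version.
    eb.swap_environment_cnames(
        SourceEnvironmentName="store-prod-blue",      # hypothetical
        DestinationEnvironmentName="store-prod-green",
    )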

NEW QUESTION 23
......

Recommend!! Get the Full DOP-C01 dumps in VCE and PDF From Allfreedumps.com, Welcome to Download: https://www.allfreedumps.com/DOP-C01-dumps.html (New 116 Q&As Version)