
DOP-C01 Exam Questions - Online Test



Pass4sure offers a free demo for the DOP-C01 exam. "AWS Certified DevOps Engineer - Professional", also known as the DOP-C01 exam, is an Amazon-Web-Services certification. This set of posts, Passing the Amazon-Web-Services DOP-C01 exam, will help you answer those questions. The DOP-C01 Questions & Answers covers all the knowledge points of the real exam. 100% real Amazon-Web-Services DOP-C01 exams, revised by experts!

Amazon-Web-Services DOP-C01 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
When storing sensitive data in the cloud, which of the below options should be carried out on AWS? Choose 3 answers from the options given below.

  • A. With AWS you do not need to worry about encryption
  • B. Enable EBS Encryption
  • C. Encrypt the file system on an EBS volume using Linux tools
  • D. Enable S3 Encryption

Answer: BCD

Explanation:
Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
Data at rest inside the volume
All data moving between the volume and the instance
All snapshots created from the volume
For more information on EBS encryption, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. For more information on S3 encryption, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
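As a rough sketch of options B and D in practice, the CloudFormation fragment below declares an encrypted EBS volume and an S3 bucket with default server-side encryption. The resource names and values are ours, not from the question.

```yaml
Resources:
  SecureVolume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: us-east-1a   # illustrative AZ
      Size: 100
      Encrypted: true                # EBS encryption (option B)
  SecureBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:              # S3 default encryption (option D)
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
```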

NEW QUESTION 2
Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements?

  • A. Use OpsWorks Stacks with three layers to model the layering in your stack.
  • B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud.
  • C. Use AWS Config to declare a configuration set that AWS should roll out to your cloud.
  • D. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.

Answer: B

Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on nested stacks, please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#nested
Note:
The question asks how you can automate a stack over a period of time, when changes are required, without recreating the stack.
The function of nested stacks is to reuse common template patterns.
For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the resource to reference that template from within other templates.
Yet another example is if you have a launch configuration with certain specific configuration and you need to change the instance size only in the production environment and to leave it as it is in the development environment.
AWS also recommends that updates to nested stacks are run from the parent stack.
When you apply template changes to update a top-level stack, AWS CloudFormation updates the top-level stack and initiates an update to its nested stacks. AWS CloudFormation updates the resources of modified nested stacks, but does not update the resources of unmodified nested stacks.
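A minimal sketch of such a parent template with three nested layers follows; the bucket URL, template names, and output names are hypothetical.

```yaml
Resources:
  NetworkLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/network.yaml
  DataLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/data.yaml
      Parameters:
        VpcId: !GetAtt NetworkLayer.Outputs.VpcId   # wire layers together via stack outputs
  AppLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/app.yaml
      Parameters:
        VpcId: !GetAtt NetworkLayer.Outputs.VpcId
```

Updating any child template and re-running an update on this parent stack changes only the modified nested stacks, which is what makes the layering trackable and carefully controlled.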

NEW QUESTION 3
You have an OpsWorks stack defined with Linux instances. You have executed a recipe, but the execution has failed. What is one of the ways that you can use to diagnose why the recipe did not execute correctly?

  • A. Use AWS CloudTrail and check the OpsWorks logs to diagnose the error
  • B. Use AWS Config and check the OpsWorks logs to diagnose the error
  • C. Log in to the instance and check if the recipe was properly configured.
  • D. Deregister the instance and check the EC2 logs

Answer: C

Explanation:
The AWS Documentation mentions the following
If a recipe fails, the instance will end up in the setup_failed state instead of online. Even though the instance is not online as far as AWS OpsWorks Stacks is concerned, the EC2 instance is running and it's often useful to log in to troubleshoot the issue. For example, you can check whether an application or custom cookbook is correctly installed. The AWS OpsWorks Stacks built-in support for SSH and RDP login is available only for instances in the online state.
For more information on OpsWorks troubleshooting, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/troubleshoot-debug-login.html

NEW QUESTION 4
You are responsible for your company's large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer. While reviewing metrics, you've started noticing an upward trend in slow customer page load times. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this issue?

  • A. Re-deploy your infrastructure using an AWS CloudFormation template. Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
  • B. Re-deploy your infrastructure using an AWS CloudFormation template. Spin up a second AWS CloudFormation stack. Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack.
  • C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
  • D. Re-deploy your application using an Auto Scaling template. Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

Answer: C

Explanation:
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
Options A and B are invalid because Auto Scaling is required to solve the issue and ensure the application can handle high traffic loads.
Option D is invalid because there is no such thing as an Auto Scaling template.
For more information on Auto Scaling, please refer to the below AWS documentation link: http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html
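As a sketch of the request-based scaling the answer describes, a target tracking policy can scale on requests per target behind an ALB. The ASG reference, resource label, and target value below are hypothetical.

```yaml
Resources:
  RequestCountPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WebAutoScalingGroup   # hypothetical ASG defined elsewhere
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ALBRequestCountPerTarget
          # app/<alb-name>/<alb-id>/targetgroup/<tg-name>/<tg-id>, illustrative only
          ResourceLabel: app/my-alb/1234567890abcdef/targetgroup/my-tg/0123456789abcdef
        TargetValue: 500   # keep each instance near 500 requests before scaling out
```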

NEW QUESTION 5
You have a setup in AWS which consists of EC2 instances sitting behind an ELB. The launching and termination of the instances are controlled via an Auto Scaling group. The architecture includes a MySQL AWS RDS database. Which of the following can be used to introduce one more step towards a self-healing architecture for this design?

  • A. Enable Read Replicas for the AWS RDS database.
  • B. Enable the Multi-AZ feature for the AWS RDS database.
  • C. Create one more ELB in another region for fault tolerance
  • D. Create one more Auto Scaling group in another region for fault tolerance

Answer: B

Explanation:
The AWS documentation mentions the following
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
For more information on AWS RDS Multi-AZ, please refer to the below link:
• https://aws.amazon.com/rds/details/multi-az/
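A minimal sketch of enabling Multi-AZ on an RDS instance in CloudFormation; the instance class, storage size, and SSM parameter path are our assumptions.

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.m5.large
      AllocatedStorage: "100"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:ssm-secure:/app/db/password:1}}"  # hypothetical secure parameter
      MultiAZ: true   # RDS provisions a synchronous standby in another AZ
```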

NEW QUESTION 6
As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

  • A. Ensure that the I/O block sizes for the test are randomly selected.
  • B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
  • C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • D. Ensure that the Amazon EBS volume is encrypted.

Answer: B

Explanation:
During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance.
New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable.
Option A is invalid because block sizes are predetermined and should not be randomly selected.
Option C is invalid because this is part of continuous integration; volumes can be destroyed after the test, so snapshots should not be created unnecessarily.
Option D is invalid because encryption is a security feature and not normally part of load tests.
For more information on EBS initialization please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html

NEW QUESTION 7
You have enabled Elastic Load Balancing HTTP health checking. After looking at the AWS Management Console, you see that all instances are passing health checks, but your customers are reporting that your site is not responding. What is the cause?

  • A. The HTTP health checking system is misreportingdue to latency in inter-instance metadata synchronization.
  • B. The health check in place is not sufficiently evaluating the application function.
  • C. The application is returning a positive health check too quickly for the AWS Management Console to respond.
  • D. Latency in DNS resolution is interfering with Amazon EC2 metadata retrieval.

Answer: B

Explanation:
You need to have a custom health check which will evaluate the application functionality. It's not enough to use the normal health checks. If the application functionality does not work and you don't have custom health checks, the instances will still be deemed healthy.
If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy. The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance.
For more information on Auto Scaling health checks, please refer to the below AWS documentation link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html
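A sketch of wiring an Auto Scaling group to a deeper, application-aware check; the /health path, VPC and subnet references, and launch configuration below are assumptions of ours.

```yaml
Resources:
  WebTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref AppVpc              # hypothetical VPC resource
      Protocol: HTTP
      Port: 80
      HealthCheckPath: /health        # endpoint that exercises real application logic
      Matcher:
        HttpCode: "200"
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      HealthCheckType: ELB            # replace instances the load balancer reports unhealthy
      HealthCheckGracePeriod: 300
      LaunchConfigurationName: !Ref WebLaunchConfig               # hypothetical
      VPCZoneIdentifier: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]  # hypothetical
      TargetGroupARNs: [!Ref WebTargetGroup]
```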

NEW QUESTION 8
The development team has developed a new feature that uses an AWS service and wants to test it from inside a staging VPC. How should you test this feature with the fastest turnaround time?

  • A. Launch an Amazon Elastic Compute Cloud (EC2) instance in the staging VPC in response to a development request, and use configuration management to set up the application. Run any testing harnesses to verify application functionality and then use Amazon Simple Notification Service (SNS) to notify the development team of the results.
  • B. Use an Amazon EC2 instance that frequently polls the version control system to detect the new feature, use AWS CloudFormation and Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon SNS to notify the development team of the results.
  • C. Use an Elastic Beanstalk application that polls the version control system to detect the new feature, use AWS CloudFormation and Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon Kinesis to notify the development team of the results.
  • D. Use AWS CloudFormation to launch an Amazon EC2 instance, use Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon Kinesis to notify the development team of the results.

Answer: A

Explanation:
Using Amazon Kinesis would just take more time to set up and is not the right service for notifying the relevant team in the shortest time possible.
Since the test needs to be conducted in the staging VPC, it is best to launch the EC2 instance in the staging VPC.
For more information on the Simple Notification service, please visit the link:
• https://aws.amazon.com/sns/

NEW QUESTION 9
You have a web application that is currently running on three M3 instances in three AZs. You have an Auto Scaling group configured to scale from three to thirty instances. When reviewing your CloudWatch metrics, you see that sometimes your Auto Scaling group is hosting fifteen instances. The web application is reading and writing to a DynamoDB backend configured with 800 Write Capacity Units and 800 Read Capacity Units. Your DynamoDB primary key is the Company ID. You are hosting 25 TB of data in your web application. You have a single customer that is complaining of long load times when their staff arrives at the office at 9:00 AM and loads the website, which consists of content that is pulled from DynamoDB. You have other customers who routinely use the web application. Choose the answer that will ensure high availability and reduce the customer's access times.

  • A. Add a caching layer in front of your web application by choosing ElastiCache Memcached instances in one of the AZs.
  • B. Double the number of Read Capacity Units in your DynamoDB instance because the instance is probably being throttled when the customer accesses the website and your web application.
  • C. Change your Auto Scaling group configuration to use Amazon C3 instance types, because the web application layer is probably running out of compute capacity.
  • D. Implement an Amazon SQS queue between your DynamoDB database layer and the web application layer to minimize the large burst in traffic the customer generates when everyone arrives at the office at 9:00 AM and begins accessing the website.
  • E. Use data pipelines to migrate your DynamoDB table to a new DynamoDB table with a primary key that is evenly distributed across your dataset. Update your web application to request data from the new table.

Answer: E

Explanation:
The AWS documentation provides the following information on the best performance for DynamoDB tables:
The optimal usage of a table's provisioned throughput depends on these factors: the primary key selection, and the workload patterns on individual items.
The primary key uniquely identifies each item in a table. The primary key can be simple (partition key) or composite (partition key and sort key). When it stores data, DynamoDB divides a table's items into multiple partitions, and distributes the data primarily based upon the partition key value. Consequently, to achieve the full amount of request throughput you have provisioned for a table, keep your workload spread evenly across the partition key values. Distributing requests across partition key values distributes the requests across partitions. For more information on DynamoDB best practices please visit the link:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
Note: One of the AWS forums explains the steps for this process in detail. Based on that, while importing data from S3 using Data Pipeline to a new table in DynamoDB, we can create a new index. Please find the steps given below.
DOP-C01 dumps exhibit

NEW QUESTION 10
You are planning on using the Amazon RDS facility for fault tolerance for your application. How does the Amazon RDS Multi-Availability Zone model work?

  • A. A second, standby database is deployed and maintained in a different availability zone from master, using synchronous replication.
  • B. A second, standby database is deployed and maintained in a different availability zone from master using asynchronous replication.
  • C. A second, standby database is deployed and maintained in a different region from master using asynchronous replication.
  • D. A second, standby database is deployed and maintained in a different region from master using synchronous replication.

Answer: A

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete.
The below diagram from the AWS documentation shows how this is configured
DOP-C01 dumps exhibit
Option B is invalid because the replication is synchronous.
Options C and D are invalid because this is built around AZs and not regions. For more information on Multi-AZ RDS, please visit the below URL: https://aws.amazon.com/rds/details/multi-az/

NEW QUESTION 11
When using EC2 instances with the CodeDeploy service, which of the following are some of the prerequisites to ensure that the EC2 instances can work with CodeDeploy? Choose 2 answers from the options given below.

  • A. Ensure an IAM role is attached to the instance so that it can work with the CodeDeploy service.
  • B. Ensure the EC2 instance is configured with Enhanced Networking
  • C. Ensure the EC2 instance is placed in the default VPC
  • D. Ensure that the CodeDeploy agent is installed on the EC2 instance

Answer: AD

Explanation:
This is mentioned in the AWS documentation
DOP-C01 dumps exhibit
For more information on instances for CodeDeploy, please visit the below URL:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/instances.html
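A sketch of both prerequisites in a single CloudFormation template; the AMI ID, managed policy choice, and the region-specific agent bucket (us-east-1 shown) are our assumptions.

```yaml
Resources:
  CodeDeployInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: ec2.amazonaws.com }
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess   # lets the agent pull revisions from S3
  CodeDeployInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles: [!Ref CodeDeployInstanceRole]
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678            # hypothetical Amazon Linux AMI
      InstanceType: t2.micro
      IamInstanceProfile: !Ref CodeDeployInstanceProfile   # prerequisite from option A
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # prerequisite from option D: install the CodeDeploy agent
          yum install -y ruby wget
          cd /home/ec2-user
          wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
          chmod +x ./install
          ./install auto
```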

NEW QUESTION 12
You are using Elastic Beanstalk to manage your application. You have a SQL script that needs to only be executed once per deployment no matter how many EC2 instances you have running. How can you do this?

  • A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to false.
  • B. Use Elastic Beanstalk version and a configuration file to execute the script, ensuring that the "leader only" flag is set to true.
  • C. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true.
  • D. Use a "leader command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "container only" flag is set to true.

Answer: C

Explanation:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI ID or instance type. For more information on customizing containers, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
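A sketch of such a configuration file; the file name, database name, and script are our illustrations, and the RDS_* variables assume an Elastic Beanstalk-provisioned database.

```yaml
# .ebextensions/01-run-sql.config
container_commands:
  01_run_schema_migration:
    command: "mysql -h $RDS_HOSTNAME -u $RDS_USERNAME -p$RDS_PASSWORD mydb < schema.sql"
    leader_only: true   # executes on a single (leader) instance per deployment
```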

NEW QUESTION 13
You have an Autoscaling Group which is launching a set of t2.small instances. You now need to replace those instances with a larger instance type. How would you go about making this change in an ideal manner?

  • A. Change the instance type in the current launch configuration to the new instance type.
  • B. Create another Auto Scaling group and attach the new instance type.
  • C. Create a new launch configuration with the new instance type and update your Auto Scaling group.
  • D. Change the instance type of the underlying EC2 instance directly.

Answer: C

Explanation:
The AWS Documentation mentions
A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance. When you create an Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it.
Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.
For more information on launch configurations please see the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
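If the group is managed by CloudFormation, the create-new-and-update dance happens automatically: changing InstanceType below makes CloudFormation create a replacement launch configuration and point the Auto Scaling group at it. The AMI ID and group sizes are hypothetical.

```yaml
Resources:
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678    # hypothetical AMI
      InstanceType: t2.large   # was t2.small; launch configurations are immutable,
                               # so CloudFormation replaces this resource on update
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: "1"
      MaxSize: "5"
      AvailabilityZones: !GetAZs ""
```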

NEW QUESTION 14
Which of the following can be configured as targets for CloudWatch Events? Choose 3 answers from the options given below.

  • A. Amazon EC2 Instances
  • B. AWS Lambda Functions
  • C. Amazon CodeCommit
  • D. Amazon ECS Tasks

Answer: ABD

Explanation:
The AWS Documentation mentions the below
You can configure the following AWS services as targets for CloudWatch Events:
DOP-C01 dumps exhibit
For more information on Cloudwatch events please see the below link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
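A sketch of one such target wiring: a scheduled rule invoking a Lambda function. The function resource, names, and schedule are ours; the Lambda itself is assumed to be defined elsewhere.

```yaml
Resources:
  NightlyRule:
    Type: AWS::Events::Rule
    Properties:
      ScheduleExpression: rate(1 day)
      Targets:
        - Arn: !GetAtt CleanupFunction.Arn   # hypothetical Lambda defined elsewhere
          Id: cleanup-lambda
  EventsInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref CleanupFunction
      Action: lambda:InvokeFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt NightlyRule.Arn
```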

NEW QUESTION 15
You are a DevOps Engineer for your company. You are in charge of an application that uses EC2, ELB and Auto Scaling. You have been requested to get the ELB access logs. When you try to access the logs, you can see that nothing has been recorded in S3. Why is this the case?

  • A. You don't have the necessary access to the logs generated by ELB.
  • B. By default, ELB access logs are disabled.
  • C. The Auto Scaling service is not sending the required logs to ELB
  • D. The EC2 instances are not sending the required logs to ELB

Answer: B

Explanation:
The AWS Documentation mentions
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time.
For more information on ELB access logs please see the below link:
• http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
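A sketch of switching the feature on for a Classic Load Balancer; the bucket name and interval are our choices, and the bucket also needs a policy that lets ELB write to it.

```yaml
Resources:
  WebLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones: !GetAZs ""
      Listeners:
        - LoadBalancerPort: "80"
          InstancePort: "80"
          Protocol: HTTP
      AccessLoggingPolicy:
        Enabled: true              # disabled by default (option B)
        S3BucketName: my-elb-logs  # hypothetical bucket with an ELB write policy
        EmitInterval: 5            # publish logs every 5 minutes
```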

NEW QUESTION 16
You have a CloudFormation template defined in AWS. You need to change the current alarm threshold defined in the CloudWatch alarm. How can you achieve this?

  • A. Currently, there is no option to change what is already defined in CloudFormation templates.
  • B. Update the template and then update the stack with the new template. Automatically all resources will be changed in the stack.
  • C. Update the template and then update the stack with the new template. Only those resources that need to be changed will be changed. All other resources which do not need to be changed will remain as they are.
  • D. Delete the current CloudFormation template. Create a new one which will update the current resources.

Answer: C

Explanation:
Option A is incorrect because CloudFormation templates have the option to update resources.
Option B is incorrect because only those resources that need to be changed as part of the stack update are actually updated.
Option D is incorrect because deleting the stack is not the ideal option when you already have a change option available.
When you need to make changes to a stack's settings or change its resources, you update the stack instead of deleting it and creating a new stack. For example, if you have a stack with an EC2 instance, you can update the stack to change the instance's AMI ID.
When you update a stack, you submit changes, such as new input parameter values or an updated template. AWS CloudFormation compares the changes you submit with the current state of your stack and updates only the changed resources.
For more information on stack updates please refer to the below link:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html
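One convenient pattern for this scenario is to expose the threshold as a template parameter, so a stack update with a new parameter value touches only the alarm. The names and values below are illustrative.

```yaml
Parameters:
  CpuAlarmThreshold:
    Type: Number
    Default: 80
Resources:
  HighCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      ComparisonOperator: GreaterThanThreshold
      Threshold: !Ref CpuAlarmThreshold   # change via stack update; only this resource changes
```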

NEW QUESTION 17
You have a web application that's developed in Node.js. The code is hosted in a Git repository. You want to now deploy this application to AWS. Which of the below 2 options can fulfil this requirement?

  • A. Create an Elastic Beanstalk application. Create a Dockerfile to install Node.js. Get the code from Git. Use the command "aws git.push" to deploy the application.
  • B. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Container resource type. With UserData, install Git to download the Node.js application and then set it up.
  • C. Create a Dockerfile to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application.
  • D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up.

Answer: CD

Explanation:
Option A is invalid because there is no "aws git.push" command.
Option B is invalid because there is no AWS::EC2::Container resource type.
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Docker and Elastic Beanstalk please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). For more information on EC2 user data please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Note: "git aws.push" existed with EB CLI 2.x - see the forum thread at https://forums.aws.amazon.com/thread.jspa?messageID=583202#jive-message-582979. Basically, this is a predecessor to the newer "eb deploy" command in EB CLI 3. This question is kept in order to be consistent with the exam.

NEW QUESTION 18
Which of the following files needs to be included along with your source code binaries when your application uses the EC2/On-Premises compute platform and is deployed using the AWS CodeDeploy service?

  • A. appspec.yml
  • B. appconfig.yml
  • C. appspec.json
  • D. appconfig.json

Answer: A

Explanation:
The AWS Documentation mentions the below
The application specification file (AppSpec file) is a YAML-formatted file used by AWS CodeDeploy to determine: what it should install onto your instances from your application revision in Amazon S3 or GitHub, and which lifecycle event hooks to run in response to deployment lifecycle events. An AppSpec file must be named appspec.yml and it must be placed in the root of an application's source code's directory structure. Otherwise, deployments will fail. For more information on the AppSpec file, please visit the below URL:
http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Note: If you are deploying your code on the AWS Lambda compute platform, an AppSpec file can be YAML-formatted or JSON-formatted. You can also enter the contents of an AppSpec file directly into the AWS CodeDeploy console when you create a deployment.
However, this question is about deploying alongside your source code binaries on an EC2/On-Premises compute platform. So, an AppSpec file must be a YAML-formatted file named appspec.yml and it must be placed in the root of the directory structure of an application's source code. Otherwise, deployments fail.
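A minimal sketch of such a file for the EC2/On-Premises platform; the paths, script names, and timeouts are our illustrations.

```yaml
# appspec.yml, placed at the root of the application revision
version: 0.0
os: linux
files:
  - source: /build
    destination: /var/www/myapp
hooks:
  AfterInstall:
    - location: scripts/configure.sh   # hypothetical script shipped in the revision
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
```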

NEW QUESTION 19
You work for a company that has multiple applications which are very different and built on different programming languages. How can you deploy applications as quickly as possible?

  • A. Develop each app in one Docker container and deploy using Elastic Beanstalk
  • B. Create a Lambda function deployment package consisting of code and any dependencies
  • C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk
  • D. Develop each app in separate Docker containers and deploy using CloudFormation

Answer: C

Explanation:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
Option A is not an efficient way to use Docker; the entire idea of Docker is that you have a separate environment for each application.
Option B is ideal for running code, not for packaging applications and their dependencies.
Option D is not ideal because CloudFormation alone is not the right tool for deploying Docker containers.
For more information on Docker and Elastic Beanstalk, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

NEW QUESTION 20
You have a complex system that involves networking, IAM policies, and multiple, three-tier applications. You are still receiving requirements for the new system, so you don't yet know how many AWS components will be present in the final design. You want to start using AWS CloudFormation to define these AWS resources so that you can automate and version-control your infrastructure. How would you use AWS CloudFormation to provide agile new environments for your customers in a cost-effective, reliable manner?

  • A. Manually create one template to encompass all the resources that you need for the system, so you only have a single template to version-control.
  • B. Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation, and maintain several templates to version-control.
  • C. Create multiple separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon Elastic Compute Cloud (EC2) instance running the SDK for finer granularity of control.
  • D. Manually construct the networking layer using Amazon Virtual Private Cloud (VPC) because this does not change often, and then use AWS CloudFormation to define all other ephemeral resources.

Answer: B

Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

NEW QUESTION 21
You have instances running in your VPC, both production and development instances. You want to ensure that people who are responsible for the development instances don't have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this? Choose the correct answer from the options given below.

  • A. Launch the test and production instances in separate VPCs and use VPC peering
  • B. Create an IAM policy with a condition which allows access to only instances that are used for production or development
  • C. Launch the test and production instances in different Availability Zones and use Multi-Factor Authentication
  • D. Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

Answer: D

Explanation:
You can easily add tags which define which instances are production and which are development instances, and then ensure these tags are used when controlling access via an IAM policy.
For more information on tagging your resources, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
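A sketch of such a tag-conditioned policy, expressed as a CloudFormation managed policy; the tag key/value and the action list are our assumptions.

```yaml
Resources:
  DevOnlyInstanceAccess:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - ec2:StartInstances
              - ec2:StopInstances
              - ec2:RebootInstances
            Resource: "*"
            Condition:
              StringEquals:
                ec2:ResourceTag/Environment: development   # only instances tagged as development
```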

NEW QUESTION 22
You have been requested to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?

  • A. Create separate templates based on functionality, create nested stacks with CloudFormation.
  • B. Use CloudFormation custom resources to handle dependencies between stacks
  • C. Create multiple templates in one CloudFormation stack.
  • D. Combine all resources into one template for version control and automation.

Answer: A

Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices please refer to the below link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

NEW QUESTION 23
......

100% Valid and Newest Version DOP-C01 Questions & Answers shared by DumpSolutions.com, Get Full Dumps HERE: https://www.dumpsolutions.com/DOP-C01-dumps/ (New 116 Q&As)