
DOP-C01 Exam Questions - Online Test


Top-quality DOP-C01 simulation materials and testing software for the Amazon Web Services certification. Real success guaranteed with updated DOP-C01 PDF and VCE dump materials. 100% pass the AWS Certified DevOps Engineer - Professional exam today!

Amazon Web Services DOP-C01 free dumps questions online. Read and test now.

NEW QUESTION 1
When you add lifecycle hooks to an Auto Scaling group, which wait states occur during the scale-in and scale-out processes? Choose 2 answers from the options given below.

  • A. Launching:Wait
  • B. Exiting:Wait
  • C. Pending:Wait
  • D. Terminating:Wait

Answer: CD

Explanation:
The AWS Documentation mentions the following:
After you add lifecycle hooks to your Auto Scaling group, they work as follows:
1. Auto Scaling responds to scale-out events by launching instances and to scale-in events by terminating instances.
2. Auto Scaling puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance is paused until either you tell Auto Scaling to continue or the timeout period ends.
For more information on Auto Scaling lifecycle hooks, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
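As an illustration (not part of the original answer), a minimal boto3 sketch that adds a launch hook and later releases a paused instance might look like the following; the group name, hook name, and instance ID are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Add a hook that pauses newly launched instances in Pending:Wait
# until we finish custom bootstrapping (or the timeout expires).
autoscaling.put_lifecycle_hook(
    LifecycleHookName="launch-hook",
    AutoScalingGroupName="my-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# Later, once bootstrapping is done, release the instance from the wait state.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="launch-hook",
    AutoScalingGroupName="my-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",
)
```

A matching hook with LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING" would pause instances in Terminating:Wait on scale-in.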

NEW QUESTION 2
An application is currently writing a large number of records to a DynamoDB table in one region. There is a requirement for a secondary application to just take in the changes to the DynamoDB table every 2 hours and process the updates accordingly. Which of the following is an ideal way to ensure the secondary application can get the relevant changes from the DynamoDB table?

  • A. Insert a timestamp for each record and then scan the entire table for the timestamps as per the last 2 hours.
  • B. Create another DynamoDB table with the records modified in the last 2 hours.
  • C. Use DynamoDB Streams to monitor the changes in the DynamoDB table.
  • D. Transfer the records to S3 which were modified in the last 2 hours

Answer: C

Explanation:
The AWS Documentation mentions the following:
A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items.
For more information on DynamoDB Streams, please visit the below URL:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
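For illustration, a hedged boto3 sketch of reading change records from a stream follows; the stream ARN is a placeholder, and a real consumer would also walk the shard lineage and page through records:

```python
import boto3

streams = boto3.client("dynamodbstreams")

# The stream ARN comes from the table description once streams are enabled.
stream_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/LABEL"

desc = streams.describe_stream(StreamArn=stream_arn)
for shard in desc["StreamDescription"]["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # read from the oldest available record
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        print(record["eventName"], record["dynamodb"].get("Keys"))
```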

NEW QUESTION 3
You need to deploy a multi-container Docker environment onto Elastic Beanstalk. Which of the following files can be used to deploy a set of Docker containers to Elastic Beanstalk?

  • A. Dockerfile
  • B. DockerMultifile
  • C. Dockerrun.aws.json
  • D. Dockerrun

Answer: C

Explanation:
The AWS Documentation specifies:
A Dockerrun.aws.json file is an Elastic Beanstalk-specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application. You can use a Dockerrun.aws.json file for a multicontainer Docker environment.
Dockerrun.aws.json describes the containers to deploy to each container instance in the environment as well as the data volumes to create on the host instance for the containers to mount.
For more information on this, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
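As a sketch, a minimal version-2 Dockerrun.aws.json for a single nginx container could be generated like this (the container name, image, and memory size are illustrative):

```python
import json

# Minimal multicontainer definition: one nginx container mapped to port 80.
dockerrun = {
    "AWSEBDockerrunVersion": 2,  # version 2 is required for multicontainer environments
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "essential": True,
            "memory": 128,
            "portMappings": [{"hostPort": 80, "containerPort": 80}],
        }
    ],
}

with open("Dockerrun.aws.json", "w") as f:
    json.dump(dockerrun, f, indent=2)
```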

NEW QUESTION 4
You have a web application composed of an Auto Scaling group of web servers behind a load balancer, and create a new AMI for each application version for deployment. You have a new version to release, and you want to use the Blue/Green deployment technique to migrate users over in a controlled manner while the size of the fleet remains constant over a period of 6 hours, to ensure that the new version is performing well. What option should you choose to enable this technique while being able to roll back easily? Choose 2 answers from the options given below. Each answer presents part of the solution.

  • A. Create an Auto Scaling launch configuration with the new AMI. Configure the Auto Scaling group to use the new launch configuration and to register instances with the new load balancer.
  • B. Create an Auto Scaling launch configuration with the new AMI. Configure the Auto Scaling group to use the new launch configuration and to register instances with the existing load balancer.
  • C. Use Amazon Route 53 weighted round robin to vary the proportion of requests sent to the load balancers.
  • D. Configure Elastic Load Balancing to vary the proportion of requests sent to instances running the two application versions.

Answer: AC

Explanation:
The AWS documentation gives this example of a Blue/Green deployment:
DOP-C01 dumps exhibit
You can shift traffic all at once or you can do a weighted distribution. With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing, for example.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
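A hedged boto3 sketch of such a weighted shift is shown below; the hosted zone ID, record name, and ELB DNS names are placeholders:

```python
import boto3

route53 = boto3.client("route53")

def set_weights(blue_weight, green_weight):
    """Shift traffic between the blue and green load balancers."""
    changes = [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": dns_name}],
            },
        }
        for identifier, weight, dns_name in [
            ("blue", blue_weight, "blue-elb.us-east-1.elb.amazonaws.com"),
            ("green", green_weight, "green-elb.us-east-1.elb.amazonaws.com"),
        ]
    ]
    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",
        ChangeBatch={"Changes": changes},
    )

# Start a canary with 10% of traffic on green, then ramp up over the 6 hours.
set_weights(90, 10)
```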

NEW QUESTION 5
You have implemented a system to automate deployments of your configuration and application dynamically after an Amazon EC2 instance in an Auto Scaling group is launched. Your system uses a configuration management tool that works in a standalone configuration, where there is no master node. Due to the volatility of application load, new instances must be brought into service within three minutes of the launch of the instance operating system. The deployment stages take the following times to complete:
1) Installing configuration management agent: 2 min
2) Configuring instance using artifacts: 4 min
3) Installing application framework: 15 min
4) Deploying application code: 1 min
What process should you use to automate the deployment using this type of standalone agent configuration?

  • A. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to install the agent, pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the infrastructure and application.
  • B. Build a custom Amazon Machine Image that includes all components pre-installed, including an agent, configuration artifacts, application frameworks, and code. Create a startup script that executes the agent to configure the system on startup.
  • C. Build a custom Amazon Machine Image that includes the configuration management agent and application framework pre-installed. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the system.
  • D. Create a web service that polls the Amazon EC2 API to check for new instances that are launched in an Auto Scaling group. When it recognizes a new instance, execute a remote script via SSH to install the agent, SCP the configuration artifacts and application code, and finally execute the agent to configure the system.

Answer: B

Explanation:
Since the new instances need to be brought into service within 3 minutes, the best option is to pre-bake all the components into an AMI. If you try to use the User Data option, the installation and configuration of the various components will simply take too long, given the stage durations mentioned in the question.
For more information on AMI design please see the below link:
• https://aws.amazon.com/answers/configuration-management/aws-ami-design/

NEW QUESTION 6
As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon EBS PIOPS volume per instance and requires consistent I/O performance.
Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

  • A. Ensure that the I/O block sizes for the test are randomly selected.
  • B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
  • C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • D. Ensure that the Amazon EBS volume is encrypted.

Answer: B

Explanation:
Since volumes created from the AMI are restored lazily from S3 snapshots, always ensure the volume is pre-warmed (every block read once) before it is used for the load test; otherwise the first access to each block incurs extra latency and skews the results.
For more information on benchmarking procedures, please see the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/benchmark_procedures.html
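A minimal sketch of pre-warming from Python is below, assuming the volume is attached as /dev/xvdf (the traditional equivalent is dd or fio run as root):

```python
# Read every block of the volume once so that all data is pulled down
# from the S3 snapshot before the benchmark starts (must run as root).
CHUNK = 1024 * 1024  # 1 MiB

with open("/dev/xvdf", "rb") as device:
    while device.read(CHUNK):
        pass
```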

NEW QUESTION 7
Which of the below is not a lifecycle event in OpsWorks?

  • A. Setup
  • B. Uninstall
  • C. Configure
  • D. Shutdown

Answer: B

Explanation:
Below are the lifecycle events of OpsWorks Stacks:
1) Setup - This event occurs after a started instance has finished booting.
2) Configure - This event occurs on all of the stack's instances when one of the following occurs:
a) An instance enters or leaves the online state.
b) You associate an Elastic IP address with an instance or disassociate one from an instance.
c) You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
3) Deploy - This event occurs when you run a Deploy command, typically to deploy an application to a set of application server instances.
4) Undeploy - This event occurs when you delete an app or run an Undeploy command to remove an app from a set of application server instances.
5) Shutdown - This event occurs after you direct AWS OpsWorks Stacks to shut an instance down but before the associated Amazon EC2 instance is actually terminated.
For more information on OpsWorks lifecycle events, please visit the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

NEW QUESTION 8
Your company has an application hosted in AWS which makes use of DynamoDB. There is a requirement from the IT Security department to ensure that all source IP addresses which make calls to the DynamoDB tables are recorded. Which of the following services can be used to ensure this requirement is fulfilled?

  • A. AWS CodeCommit
  • B. AWS CodePipeline
  • C. AWS CloudTrail
  • D. AWS CloudWatch

Answer: C

Explanation:
The AWS Documentation mentions the following:
DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on.
For more information on DynamoDB and CloudTrail, please refer to the below link:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/logging-using-cloudtrail.html
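To illustrate, a small sketch that extracts the source IPs of DynamoDB calls from a delivered log file follows; the file name is a placeholder, and the file is assumed to have already been downloaded from the trail's S3 bucket:

```python
import gzip
import json

# CloudTrail delivers gzipped JSON files to the configured S3 bucket.
with gzip.open("cloudtrail-log.json.gz", "rt") as f:
    trail = json.load(f)

# Each record carries the calling service, API name, and caller's source IP.
for event in trail["Records"]:
    if event.get("eventSource") == "dynamodb.amazonaws.com":
        print(event["eventName"], event["sourceIPAddress"])
```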

NEW QUESTION 9
Which of the following cache engines does OpsWorks have built-in support for?

  • A. Redis
  • B. Memcached
  • C. Both Redis and Memcached
  • D. There is no built-in support as of yet for any cache engine

Answer: B

Explanation:
The AWS Documentation mentions:
AWS OpsWorks Stacks provides built-in support for Memcached. However, if Redis better suits your requirements, you can customize your stack so that your application servers use ElastiCache Redis. Although it works with Redis clusters, AWS clearly specifies that AWS OpsWorks Stacks provides built-in support only for Memcached.
Amazon ElastiCache is an AWS service that makes it easy to provide caching support for your application server, using either the Memcached or Redis caching engines. ElastiCache can be used to improve the performance of application servers running on AWS OpsWorks Stacks.
For more information on OpsWorks and cache engines, please refer to the below link:
• http://docs.aws.amazon.com/opsworks/latest/userguide/other-services-redis.html

NEW QUESTION 10
Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do?

  • A. Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
  • B. Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.
  • C. Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the users' public keys into the appropriate accounts.
  • D. Place the credentials provided by Amazon EC2 onto an MFA-encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.

Answer: C

Explanation:
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
A private S3 bucket can be created for each developer, the keys can be stored in the bucket, and access can then be granted through the instance profile.
Options A and D are invalid because the credentials should not be the ones provided by an AWS EC2 instance. Option B is invalid because you would not create a service account; instead you should create an instance profile.
For more information on instance profiles, please refer to the below link from AWS:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
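A hedged boto3 sketch of building such an instance profile is below; the role and profile names are placeholders, and the role would still need policies granting read access to each developer's key bucket:

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy letting EC2 assume the role on the instances' behalf.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Create the role, wrap it in an instance profile, and link the two.
iam.create_role(
    RoleName="dev-access-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)
iam.create_instance_profile(InstanceProfileName="dev-access-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="dev-access-profile",
    RoleName="dev-access-role",
)
```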

NEW QUESTION 11
Your company has an application sitting on EC2 Instances behind an Elastic Load Balancer. The EC2 Instances are being launched as part of an Auto Scaling group. Which of the following is an example of Blue/Green deployments in AWS?

  • A. Use a CloudFormation stack to deploy your resources. Use 2 CloudFormation stacks. Whenever you want to switch over, deploy and use the resources in the second CloudFormation stack.
  • B. Use the Elastic Beanstalk service to deploy your resources. Use 2 Elastic Beanstalk environments. Use rolling deployments to switch between the environments.
  • C. Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group.
  • D. Use the OpsWorks service to deploy your resources. Use 2 OpsWorks layers to deploy 2 versions of your application. When the time comes for the switch, change to the alternate layer in the OpsWorks stack.

Answer: C

Explanation:
This deployment technique is given in the AWS whitepaper:
DOP-C01 dumps exhibit
As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state. Standby is a good option because if you need to roll back to the blue environment, you only have to put your blue server instances back in service and they're ready to go. As soon as the green group is scaled up without issues, you can decommission the blue group by adjusting the group size to zero. If you need to roll back, detach the load balancer from the green group or reduce the group size of the green group to zero.
For more information on Blue/Green deployments, please visit the below URL:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
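A boto3 sketch of those two steps (Standby for fast rollback, then scale-to-zero to decommission) might look like this; the group name and instance ID are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Put a blue instance into Standby so it can be brought back quickly
# if a rollback is needed (the group's MinSize must allow the reduced capacity).
autoscaling.enter_standby(
    AutoScalingGroupName="blue-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ShouldDecrementDesiredCapacity=True,
)

# Once green is verified, decommission blue by shrinking it to zero.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="blue-asg",
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)
```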

NEW QUESTION 12
You have a video processing application hosted in AWS. The videos are uploaded by users onto the site. You have a program that is custom built to process those videos. The program is able to recover in case there are any failures when processing the videos. Which of the following mechanisms can be used to deploy the instances for carrying out the video processing activities, ensuring that the cost is kept at a minimum?

  • A. Create a launch configuration with Reserved Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
  • B. Create a launch configuration with Spot Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
  • C. Create a launch configuration with Dedicated Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
  • D. Create a launch configuration with On-Demand Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.

Answer: B

Explanation:
Since the application can recover from failures and cost is the priority, Spot Instances are the best bet for this requirement. The launch configuration has the facility to request Spot Instances.
The below snapshot from the Launch Configuration section shows that Spot Instances can be used with Auto Scaling groups.
DOP-C01 dumps exhibit
For more information on Spot Instances and Auto Scaling, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/US-SpotInstances.html
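For illustration, a hedged boto3 sketch of a Spot-based launch configuration attached to an Auto Scaling group follows; the AMI ID, bid price, and install script are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

user_data = """#!/bin/bash
# Hypothetical install step for the custom video-processing software.
/opt/install-video-processor.sh
"""

# Setting SpotPrice makes the launch configuration request Spot Instances;
# instances launch only while the Spot price is at or below the bid.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="video-spot-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="c4.large",
    SpotPrice="0.10",
    UserData=user_data,
)

# Scaling policies (not shown) would then drive capacity between 0 and 10.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="video-spot-asg",
    LaunchConfigurationName="video-spot-lc",
    MinSize=0,
    MaxSize=10,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```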

NEW QUESTION 13
Your company has a set of EC2 Instances that access data objects stored in an S3 bucket. Your IT Security department is concerned about the security of this architecture and wants you to implement the following:
1) Ensure that the EC2 Instances securely access the data objects stored in the S3 bucket
2) Ensure that the integrity of the objects stored in S3 is maintained.
Which of the following would help fulfil the requirements of the IT Security department? Choose 2 answers from the options given below.

  • A. Create an IAM user and ensure the EC2 Instances use the IAM user credentials to access the data in the bucket.
  • B. Create an IAM Role and ensure the EC2 Instances use the IAM Role to access the data in the bucket.
  • C. Use S3 Cross Region Replication to replicate the objects so that the integrity of data is maintained.
  • D. Use an S3 bucket policy that ensures that MFA Delete is set on the objects in the bucket

Answer: BD

Explanation:
The AWS Documentation mentions the following:
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
MFA Delete can be used to add another layer of security to S3 objects to prevent accidental deletion of objects. For more information on MFA Delete, please refer to the below link:
• https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
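A boto3 sketch of enabling MFA Delete on the bucket is shown below; the bucket name, MFA device ARN, and token code are placeholders, and the call must be made by the root account that owns the bucket:

```python
import boto3

s3 = boto3.client("s3")

# Enabling MFA Delete requires versioning, and the MFA argument is the
# device serial number and a current token code separated by a space.
s3.put_bucket_versioning(
    Bucket="my-data-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"MFADelete": "Enabled", "Status": "Enabled"},
)
```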

NEW QUESTION 14
The project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production. How should you do this in a way that accommodates each department, using their existing workflows?

  • A. Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking, and security groups and IAM information for Security.
  • B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control.
  • C. Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department's use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation.
  • D. Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments.

Answer: B

Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on best practices for CloudFormation, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
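As a sketch, a parent template that nests the departmental templates via AWS::CloudFormation::Stack could be generated like this; the template URLs and output names are placeholders:

```python
import json

# Parent template: the networking and security stacks are owned by their
# departments, and their outputs feed the application stack's parameters.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/my-templates/network.json"},
        },
        "SecurityStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/my-templates/security.json"},
        },
        "AppStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/app.json",
                "Parameters": {
                    "SubnetId": {"Fn::GetAtt": ["NetworkStack", "Outputs.SubnetId"]},
                    "SecurityGroupId": {"Fn::GetAtt": ["SecurityStack", "Outputs.SecurityGroupId"]},
                },
            },
        },
    },
}

with open("parent-template.json", "w") as f:
    json.dump(parent_template, f, indent=2)
```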

NEW QUESTION 15
Which of the following is a reliable and durable logging solution to track changes made to your AWS resources?

  • A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
  • B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
  • C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
  • D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

Answer: A

Explanation:
AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls and also AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets. You need to ensure that all services are included; hence option B, which omits the global services option, is only partially correct.
Options B and D are also wrong because they just add the overhead of SNS notifications and of maintaining three S3 buckets.
For more information on CloudTrail, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
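A minimal boto3 sketch of creating such a trail follows; the trail and bucket names are placeholders, and the bucket policy must already allow CloudTrail to write to the bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail, one new bucket, with global service events included so that
# IAM and other region-less API calls are also captured.
cloudtrail.create_trail(
    Name="account-audit-trail",
    S3BucketName="my-cloudtrail-logs",
    IncludeGlobalServiceEvents=True,
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="account-audit-trail")
```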

NEW QUESTION 16
A custom script needs to be passed to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?

  • A. User data
  • B. EC2Config service
  • C. IAM roles
  • D. AWS Config

Answer: A

Explanation:
When you configure an instance during creation, you can add custom scripts to the User Data section. So in Step 3 of creating an instance, in the Advanced Details section, we can enter custom scripts in the User Data section. The below script installs Perl during the creation of the EC2 instance.
DOP-C01 dumps exhibit
For more information on user data, please refer to the URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
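For illustration, a hedged boto3 sketch that passes a similar script through user data is shown below; the AMI ID and the Perl install step mirror the exhibit and are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# This shell script runs once at first boot of the instance.
user_data = """#!/bin/bash
yum update -y
yum install -y perl
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```

The same UserData field exists on an Auto Scaling launch configuration, which is how the script reaches every instance the group launches.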

NEW QUESTION 17
You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

  • A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
  • B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed.
  • C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
  • D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation:
Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon
Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on Elasticsearch Service, please refer to the below link:
• https://aws.amazon.com/elasticsearch-service/

NEW QUESTION 18
You have a set of EC2 Instances running behind an ELB. These EC2 Instances are launched via an Auto Scaling group. There is a requirement to ensure that the logs from the server are stored in a durable storage layer, so that the log data can be analyzed by staff in the future. Which of the following steps can be implemented to ensure this requirement is fulfilled? Choose 2 answers from the options given below.

  • A. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket.
  • B. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports.
  • C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier.
  • D. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon SQS in order to process and run reports.

Answer: AB

Explanation:
Amazon S3 is the perfect option for durable storage. The AWS Documentation mentions the following on S3 storage:
Amazon Simple Storage Service (Amazon S3) makes it simple and practical to collect, store, and analyze data - regardless of format - all at massive scale. S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices.
For more information on Amazon S3, please refer to the below URL:
• https://aws.amazon.com/s3/
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. For more information on Amazon Redshift, please refer to the below URL:
• https://aws.amazon.com/redshift/
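A small sketch of the scheduled-task script from option A, using boto3 to ship a rotated log to S3, is below; the log path and bucket name are placeholders:

```python
import datetime
import socket

import boto3

s3 = boto3.client("s3")

def ship_log(path="/var/log/app/app.log", bucket="my-log-archive"):
    """Upload the rotated log with a host- and time-stamped key;
    run this from cron after each rotation."""
    timestamp = datetime.datetime.utcnow().strftime("%Y/%m/%d/%H%M%S")
    key = f"logs/{socket.gethostname()}/{timestamp}.log"
    s3.upload_file(path, bucket, key)

ship_log()
```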

NEW QUESTION 19
Which of the following services can be used to detect the application health in a Blue/Green deployment in AWS?

  • A. AWS CodeCommit
  • B. AWS CodePipeline
  • C. AWS CloudTrail
  • D. AWS CloudWatch

Answer: D

Explanation:
The AWS Documentation mentions the following:
Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS. CloudWatch can collect and track metrics, collect and monitor log files, and set alarms. It provides system-wide visibility into resource utilization, application performance, and operational health, which are key to early detection of application health in blue/green deployments.
For more information on Blue/Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
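To illustrate, a hedged boto3 sketch of an alarm watching the green environment's classic ELB follows; the alarm name, load balancer name, and threshold are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on backend 5xx errors from the green ELB; the alarm firing is the
# signal to shift traffic weight back to the blue environment.
cloudwatch.put_metric_alarm(
    AlarmName="green-backend-5xx",
    Namespace="AWS/ELB",
    MetricName="HTTPCode_Backend_5XX",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "green-elb"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
)
```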

NEW QUESTION 20
You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible?

  • A. Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build, unit, and integration tests.
  • B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instances for unit and integration tests.
  • C. Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests.
  • D. Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.

Answer: B

Explanation:
Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
Options A and D are invalid because you can run build tests on a CI system, not only syntax checks; moreover, syntax checks are normally done at coding time, not at build time.
Option C is invalid because OpsWorks is ideally not used for build and integration tests.
For an example of a continuous integration system, please refer to Jenkins via the URL below:
• https://jenkins.io/

NEW QUESTION 21
You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make?

  • A. Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer.
  • B. Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer.
  • C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.
  • D. Deploy 2 EC2 instances in three regions and use Amazon Elastic Load Balancer.

Answer: C

Explanation:
Option A is incorrect because the question asks for high availability: with all 6 instances in a single availability zone, if that AZ goes down the entire application fails.
For options B and D, the Elastic Load Balancer is designed to only run in one region in AWS and not across multiple regions. So these options are wrong.
The right option is C.
The below example shows an Elastic Load Balancer connected to 2 EC2 instances via Auto Scaling. This is an example of an elastic and scalable web tier.
By scalable we mean that the Auto Scaling process will increase or decrease the number of EC2 instances as required.
DOP-C01 dumps exhibit
For more information on best practices for AWS Cloud applications, please visit the below URL:
• https://d0.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
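A boto3 sketch of option C's layout is shown below; the group name, launch configuration, ELB name, and Availability Zones are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Six instances split across two Availability Zones behind one ELB,
# so losing a single AZ still leaves three instances serving traffic.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",  # built from the shared AMI
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["web-elb"],
)
```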

NEW QUESTION 22
Your security officer has told you that you need to tighten up the logging of all events that occur on your AWS account. He wants to be able to access all events that occur on the account across all regions quickly and in the simplest way possible. He also wants to make sure he is the only person that has access to these events in the most secure way possible. Which of the following would be the best solution to assure his requirements are met? Choose the correct answer from the options below

  • A. Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer with a bucket policy that restricts access to his user only, and also add MFA to the policy for a further level of security.
  • B. Use CloudTrail to log all events to an Amazon Glacier vault. Make sure the vault access policy only grants access to the security officer's IP address.
  • C. Use CloudTrail to send all API calls to CloudWatch and send an email to the security officer every time an API call is made. Make sure the emails are encrypted.
  • D. Use CloudTrail to log all events to a separate S3 bucket in each region, as CloudTrail cannot write to a bucket in a different region. Use MFA and bucket policies on all the different buckets.

Answer: A

Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
You can configure CloudTrail to send all logs to a central S3 bucket. For more information on CloudTrail, please visit the below URL:
• https://aws.amazon.com/cloudtrail/
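As a sketch (not a full production policy), the bucket policy could allow CloudTrail to deliver logs while restricting reads to the security officer's user with an MFA condition; the bucket name and account IDs are placeholders, and a real lock-down would typically also add an explicit Deny for every other principal:

```python
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail must be able to check the bucket ACL...
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::audit-logs-bucket",
        },
        {   # ...and deliver log files under the AWSLogs/ prefix.
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::audit-logs-bucket/AWSLogs/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
        {   # Only the security officer, authenticated with MFA, may read.
            "Sid": "SecurityOfficerReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/security-officer"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::audit-logs-bucket",
                "arn:aws:s3:::audit-logs-bucket/*",
            ],
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="audit-logs-bucket", Policy=json.dumps(policy)
)
```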

NEW QUESTION 23
......
