SAA-C03 Exam Questions - Online Test
Your success in the Amazon Web Services SAA-C03 exam is our sole target, and we develop all of our SAA-C03 practice material to facilitate that goal. Our SAA-C03 study material is not only the best you can find, it is also the most detailed and the most up to date. The SAA-C03 practice exams for the Amazon Web Services SAA-C03 certification are written to the highest standards of technical accuracy.
Free SAA-C03 Demo Online For Amazon Web Services Certification:
NEW QUESTION 1
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?
- A. Use AWS Secrets Manager. Turn on automatic rotation.
- B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
- C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
- D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Answer: A
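The reasoning behind option A: keeping the Aurora credentials in AWS Secrets Manager removes the local file entirely, and automatic rotation removes the ongoing maintenance. The sketch below is a minimal boto3 illustration; the secret name, ARNs, and rotation Lambda are placeholder assumptions, not details from the question.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Store the Aurora credentials as a secret (placeholder values).
secret = secrets.create_secret(
    Name="prod/app/aurora-credentials",
    SecretString=json.dumps({"username": "app_user", "password": "change-me"}),
)

# Turn on automatic rotation. The rotation Lambda ARN is a placeholder;
# Secrets Manager provides rotation function templates for Aurora PostgreSQL/MySQL.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# At runtime the application fetches credentials instead of reading a local file.
creds = json.loads(secrets.get_secret_value(SecretId=secret["ARN"])["SecretString"])
```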
NEW QUESTION 2
A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs.
What should a solutions architect do to meet these requirements?
- A. Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon Elastic Container Service (Amazon ECS) task.
- B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages to the ALB endpoint.
- C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
- D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by using AWS Systems Manager Run Command.
Answer: C
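Option C works because the queue buffers orders during an outage and the instances pick them up once they recover. A rough boto3 sketch, with the queue name, message body, and processing function as assumed placeholders:

```python
import boto3


def process_order(body: str) -> None:
    """Placeholder for the existing order-processing and Amazon RDS write logic."""
    print("processing", body)


sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="order-queue")["QueueUrl"]  # placeholder queue name

# Order system side: publish each order to the queue instead of calling the instances directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "12345"}')

# EC2 worker side: poll, process, and delete only after the order is safely stored,
# so an instance or system outage simply leaves the message on the queue for later.
while True:
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        process_order(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```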
NEW QUESTION 3
A company runs its application on Amazon EC2 instances in an Auto Scaling group and stores its data in an Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?
- A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-Region Replication.
- B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.
- C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure.
- D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database.
Answer: B
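For context on option B: making an Aurora cluster Multi-AZ means adding at least one reader instance in a second Availability Zone (Aurora then fails over to it automatically), and RDS Proxy pools connections so the application rides through the failover. A hedged boto3 sketch; the cluster name, instance class, subnet IDs, secret ARN, and role ARN are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Add a reader in a second Availability Zone so the Aurora cluster can fail over.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
    AvailabilityZone="us-east-1b",
)

# Front the cluster with RDS Proxy so application connections survive failover.
rds.create_db_proxy(
    DBProxyName="app-aurora-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)
rds.register_db_proxy_targets(DBProxyName="app-aurora-proxy", DBClusterIdentifiers=["app-aurora-cluster"])
```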
NEW QUESTION 4
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational overhead.
Which solution will meet these requirements?
- A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
- B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
- C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.
- D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold is breached.
Answer: A
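Option A keeps scaling hands-off by pairing Fargate with ECS Service Auto Scaling. The sketch below registers an assumed ECS service for target tracking on average CPU; the cluster name, service name, and capacity limits are placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/app-cluster/web-service"  # placeholder cluster/service names

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    },
)
```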
NEW QUESTION 5
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company's account.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
- B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
- C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
- D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Answer: C
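The idea behind the EventBridge approach (option C) is that CloudTrail-recorded API calls can be matched by an event pattern, with an SNS topic as the target, so no polling or log querying is needed. A minimal boto3 sketch; the rule name and topic ARN are placeholders, and CloudTrail must already be enabled (as the question states):

```python
import json
import boto3

events = boto3.client("events")
topic_arn = "arn:aws:sns:us-east-1:111122223333:ami-creation-alerts"  # placeholder

# Match the CreateImage API call as recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventSource": ["ec2.amazonaws.com"], "eventName": ["CreateImage"]},
    }),
)

# Deliver matched events to the SNS topic that sends the alert.
events.put_targets(Rule="alert-on-create-image", Targets=[{"Id": "sns-alert", "Arn": topic_arn}])
```

The SNS topic's access policy also needs to allow events.amazonaws.com to publish, which is omitted here.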
NEW QUESTION 6
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
- A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
- B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
- C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
- D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Answer: A
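To see why option A is the simplest fit: a single CloudFront distribution can serve the static S3 content and the dynamic ALB content from one domain, with a path-based cache behavior deciding which origin handles each request. The sketch below is only illustrative; the bucket name, ALB DNS name, and path pattern are assumptions, and a real deployment would typically add origin access control and TLS certificate settings:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "Static content from S3, dynamic content from the ALB",
    "Enabled": True,
    "Origins": {"Quantity": 2, "Items": [
        {"Id": "s3-static", "DomainName": "example-static.s3.amazonaws.com",
         "S3OriginConfig": {"OriginAccessIdentity": ""}},
        {"Id": "alb-dynamic", "DomainName": "app-alb-123456.us-east-1.elb.amazonaws.com",
         "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                                "OriginProtocolPolicy": "https-only"}},
    ]},
    # Dynamic requests go to the ALB by default ...
    "DefaultCacheBehavior": {"TargetOriginId": "alb-dynamic",
                             "ViewerProtocolPolicy": "redirect-to-https",
                             "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
                             "MinTTL": 0},
    # ... while anything under /static/* is cached from the S3 origin.
    "CacheBehaviors": {"Quantity": 1, "Items": [
        {"PathPattern": "/static/*", "TargetOriginId": "s3-static",
         "ViewerProtocolPolicy": "redirect-to-https",
         "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
         "MinTTL": 0},
    ]},
})
```

Route 53 then points the company's domain at the distribution with an alias record.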
NEW QUESTION 7
A company wants to analyze and troubleshoot Access Denied errors and unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
- A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
- B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
- C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
- D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
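Option C is the least effort because CloudTrail logs already land in S3 and Athena can query them with plain SQL. A hedged sketch; it assumes a table named cloudtrail_logs has already been created over the CloudTrail prefix (for example with the CREATE TABLE statement the CloudTrail console generates), and the results bucket is a placeholder:

```python
import boto3

athena = boto3.client("athena")

# cloudtrail_logs is an assumed table defined over the CloudTrail S3 prefix.
query = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode LIKE '%Denied%' OR errorcode LIKE '%Unauthorized%'
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
```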
NEW QUESTION 8
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
- A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
- B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
- C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
- D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Answer: B
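As background on the Snowball Edge choice (option B): the job is ordered through the API and the 70 TB transfer happens offline on the device, which is what keeps network usage minimal. A rough boto3 sketch; the bucket ARN, address ID, and role ARN are placeholders that would come from your own account:

```python
import boto3

snowball = boto3.client("snowball")

snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    Description="Import 70 TB of video files into Amazon S3",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-video-archive"}]},
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # shipping address created beforehand
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    ShippingOption="SECOND_DAY",
)
```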
NEW QUESTION 9
A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application will use the TCP and UDP protocols for communication. The company needs to provide high availability and minimum latency for global users.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
- A. Create internal Network Load Balancers in front of the application in each Region
- B. Create external Application Load Balancers in front of the application in each Region
- C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region
- D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic
- E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region
Answer: AC
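Options A and C combine because Network Load Balancers handle TCP and UDP, and Global Accelerator gives users a single anycast entry point that routes to the closest healthy Region. A sketch of the accelerator side with boto3; the NLB ARN and port are placeholders, and the Global Accelerator API is called in us-west-2:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)

# UDP listener shown; a second listener would cover the TCP ports the game uses.
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7000}],  # placeholder game port
)

# One endpoint group per Region, each pointing at that Region's Network Load Balancer.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc123",
        "Weight": 128,
    }],
)
```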
NEW QUESTION 10
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
- A. Configure a CloudFront signed URL
- B. Configure a CloudFront signed cookie.
- C. Configure a CloudFront field-level encryption profile
- D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy
Answer: C
NEW QUESTION 11
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template.
What should the solutions architect do to meet these requirements?
- A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
- B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile, and associate the instance profile with the application instances.
- C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
- D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.
Answer: B
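The mechanics behind option B, shown outside of CloudFormation, look roughly like this with boto3: a role trusted by EC2, an instance profile that wraps it, and an association with the instance, so the application receives temporary credentials and no keys appear in the template. The role, policy, table ARN, and instance ID below are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}],
}

iam.create_role(RoleName="app-dynamodb-role", AssumeRolePolicyDocument=json.dumps(trust_policy))

# Grant read/write access to the application's DynamoDB tables (placeholder table ARN).
iam.put_role_policy(
    RoleName="app-dynamodb-role",
    PolicyName="dynamodb-read-write",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:Query"],
                       "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserData"}],
    }),
)

# The instance profile is what an EC2 instance actually carries.
iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-dynamodb-profile", RoleName="app-dynamodb-role")

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-profile"},
    InstanceId="i-0123456789abcdef0",  # placeholder application instance
)
```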
NEW QUESTION 12
A company is running a high performance computing (HPC) workload on AWS across many Linux-based Amazon EC2 instances. The company needs a shared storage system that is capable of sub-millisecond latencies, hundreds of Gbps of throughput, and millions of IOPS. Users will store millions of small files.
Which solution meets these requirements?
- A. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on each of the EC2 instances.
- B. Create an Amazon S3 bucket. Mount the S3 bucket on each of the EC2 instances.
- C. Ensure that the EC2 instances are Amazon Elastic Block Store (Amazon EBS) optimized. Mount Provisioned IOPS SSD (io2) EBS volumes with Multi-Attach on each instance.
- D. Create an Amazon FSx for Lustre file system. Mount the file system on each of the EC2 instances.
Answer: D
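FSx for Lustre (option D) is the shared file system built for this latency, throughput, and IOPS profile. A minimal boto3 sketch for creating a persistent file system; the capacity, subnet, and per-unit throughput values are placeholder assumptions:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,               # GiB, placeholder starting size
    SubnetIds=["subnet-aaaa1111"],      # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,  # MB/s per TiB
    },
)

# Each EC2 instance then mounts it with the Lustre client, e.g.:
#   sudo mount -t lustre <DNSName>@tcp:/<MountName> /fsx
print(response["FileSystem"]["DNSName"], response["FileSystem"]["LustreConfiguration"]["MountName"])
```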
NEW QUESTION 13
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
- D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Answer: B
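In option B, AppFlow takes the EC2 instances out of the data-transfer path, and the completion notice comes from an S3 event notification instead of application code. The notification half is small enough to sketch with boto3; the bucket name and topic ARN are placeholders, and the SNS topic policy must already allow S3 to publish:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="example-analysis-bucket",  # placeholder bucket that AppFlow writes into
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:upload-complete",  # placeholder topic
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```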
NEW QUESTION 14
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
- B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
- C. Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
- D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
Answer: A
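Option A moves the existing containers onto Fargate mostly unchanged; the piece that replaces the on-premises request distribution is an ECS service registered with an ALB target group. A boto3 sketch with placeholder cluster, task definition, target group, subnet, and security group identifiers:

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="web-cluster",
    serviceName="web-service",
    taskDefinition="web-app:1",  # placeholder task definition revision
    desiredCount=3,
    launchType="FARGATE",
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/abc123",
        "containerName": "web",
        "containerPort": 8080,
    }],
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }},
)
```

Service Auto Scaling would then be attached to this service in the same way as in the sketch for question 4.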
NEW QUESTION 15
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-peak hours.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
- A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
- B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
- C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
- D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.
Answer: B
NEW QUESTION 16
A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 bucket. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?
- A. Edit the job to use job bookmarks.
- B. Edit the job to delete data after the data is processed
- C. Edit the job by setting the NumberOfWorkers field to 1.
- D. Use a FindMatches machine learning (ML) transform.
Answer: A
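Job bookmarks (option A) are Glue's built-in way to track which S3 objects a job has already processed, so each run only picks up new data. They can be switched on in the job definition or per run; a boto3 sketch with a placeholder job name:

```python
import boto3

glue = boto3.client("glue")

# Enable the bookmark for this run so previously processed S3 objects are skipped.
glue.start_job_run(
    JobName="daily-xml-etl",  # placeholder job name
    Arguments={"--job-bookmark-option": "job-bookmark-enable"},
)
```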
NEW QUESTION 17
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
- A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
- B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
- C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
- D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.
Answer: A
NEW QUESTION 18
A company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
- A. Use AWS Config rules to define and detect resources that are not properly tagged.
- B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
- C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
- D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
Answer: A
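AWS Config ships a managed rule (REQUIRED_TAGS) for exactly this check, which is why option A has the least configuration and operating effort. A boto3 sketch; the tag key and the resource-type scope are assumptions chosen to match the question:

```python
import json
import boto3

config = boto3.client("config")

config.put_config_rule(ConfigRule={
    "ConfigRuleName": "required-tags-check",
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    "InputParameters": json.dumps({"tag1Key": "CostCenter"}),  # assumed required tag key
    "Scope": {"ComplianceResourceTypes": [
        "AWS::EC2::Instance",
        "AWS::RDS::DBInstance",
        "AWS::Redshift::Cluster",
    ]},
})
```

This assumes an AWS Config recorder is already running in the account.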
NEW QUESTION 19
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
- A. Take a manual snapshot of the DB cluster.
- B. Create a lifecycle policy for the automated backups.
- C. Configure automated backup retention for 5 years.
- D. Configure an Amazon CloudWatch Logs export for the DB cluster.
- E. Use AWS Backup to take the backups and to keep the backups for 5 years.
Answer: AD
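For option D, the audit trail comes from exporting the cluster's PostgreSQL logs to CloudWatch Logs, where the log group's retention can be left at never expire. A boto3 sketch with a placeholder cluster identifier; enabling the relevant audit logging parameters (for example pgaudit) in the cluster parameter group is assumed to be handled separately:

```python
import boto3

rds = boto3.client("rds")

# Publish the cluster's PostgreSQL logs to CloudWatch Logs.
rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",  # placeholder cluster name
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# CloudWatch Logs keeps the exported logs indefinitely unless a retention policy is set,
# which satisfies the requirement to keep audit logs of database actions indefinitely.
```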