
Professional-Cloud-DevOps-Engineer Exam Questions - Online Test



Testking offers a free demo for the Professional-Cloud-DevOps-Engineer exam. "Google Cloud Certified - Professional Cloud DevOps Engineer Exam", also known as the Professional-Cloud-DevOps-Engineer exam, is a Google certification. This set of posts, Passing the Google Professional-Cloud-DevOps-Engineer exam, will help you answer those questions. The Professional-Cloud-DevOps-Engineer Questions & Answers cover all the knowledge points of the real exam, revised by experts.

Free demo questions for Google Professional-Cloud-DevOps-Engineer Exam Dumps Below:

NEW QUESTION 1
You are managing the production deployment to a set of Google Kubernetes Engine (GKE) clusters. You want to make sure only images which are successfully built by your trusted CI/CD pipeline are deployed to production. What should you do?

  • A. Enable Cloud Security Scanner on the clusters.
  • B. Enable Vulnerability Analysis on the Container Registry.
  • C. Set up the Kubernetes Engine clusters as private clusters.
  • D. Set up the Kubernetes Engine clusters with Binary Authorization.

Answer: D

Explanation:
https://cloud.google.com/binary-authorization/docs/overview
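As a sketch of how this enforcement looks in practice, a Binary Authorization policy can require an attestation from the CI/CD pipeline's attestor before GKE will admit an image. The project and attestor names below are placeholder assumptions:

```yaml
# Hypothetical Binary Authorization policy: block any image that lacks an
# attestation from the trusted CI/CD pipeline's attestor.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/example-project/attestors/ci-pipeline-attestor
```

A policy like this would typically be applied with `gcloud container binauthz policy import policy.yaml`.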

NEW QUESTION 2
You support a large service with a well-defined Service Level Objective (SLO). The development team deploys new releases of the service multiple times a week. If a major incident causes the service to miss its SLO, you want the development team to shift its focus from working on features to improving service reliability. What should you do before a major incident occurs?

  • A. Develop an appropriate error budget policy in cooperation with all service stakeholders.
  • B. Negotiate with the product team to always prioritize service reliability over releasing new features.
  • C. Negotiate with the development team to reduce the release frequency to no more than once a week.
  • D. Add a plugin to your Jenkins pipeline that prevents new releases whenever your service is out of SLO.

Answer: A

Explanation:
Reason: the incident has not yet occurred, even though the development team is already pushing new releases multiple times a week. Option A says to develop an error budget "policy", not to define the error budget itself (that already exists as part of the SLO). It means bringing all service stakeholders together and deciding in advance how the error budget will be consumed, so that feature work and reliability stay in balance.
The goals of this policy are to: protect customers from repeated SLO misses, and provide an incentive to balance reliability with other features. https://sre.google/workbook/error-budget-policy/
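To make the error budget concrete, here is a small illustrative calculation (the 99.9% SLO, 28-day window, and 25-minute incident are assumed numbers, not from the question):

```python
# Illustrative error-budget arithmetic for a 99.9% availability SLO
# over a 28-day rolling window.

def error_budget_minutes(slo: float, window_days: int = 28) -> float:
    """Minutes of allowed unavailability in the window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)   # about 40.3 minutes
remaining = budget - 25                # after a hypothetical 25-minute incident
print(round(budget, 1), round(remaining, 1))
```

An error budget policy then states what happens when `remaining` approaches zero, e.g. freezing feature releases until reliability work restores headroom.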

NEW QUESTION 3
You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

  • A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
  • B. Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your application. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • D. Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.

Answer: B

Explanation:
https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd
Besides the list of default logs that the Logging agent streams by default, you can customize the Logging agent to send additional logs to Logging or to adjust agent settings by adding input configurations. The configuration definitions in these sections apply to the fluent-plugin-google-cloud output plugin only and specify how logs are transformed and ingested into Cloud Logging. https://cloud.google.com/logging/docs/agent/logging/configuration#configure
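As an illustration, a tailored Fluentd input configuration for this scenario might look like the following (the tag and position-file path are assumptions, not taken from the referenced guide):

```
<source>
  @type tail
  path /var/log/app_messages.log
  pos_file /var/lib/app-messages.pos
  tag app-messages
  <parse>
    @type none
  </parse>
</source>
```

The daemonset's output plugin (fluent-plugin-google-cloud) then ships entries tagged `app-messages` to Cloud Logging.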

NEW QUESTION 4
You are running an application on Compute Engine and collecting logs through Stackdriver. You discover that some personally identifiable information (PII) is leaking into certain log entry fields. All PII entries begin with the text userinfo. You want to capture these log entries in a secure location for later review and prevent them from leaking to Stackdriver Logging. What should you do?

  • A. Create a basic log filter matching userinfo, and then configure a log export in the Stackdriver console with Cloud Storage as a sink.
  • B. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, and then copy the entries to a Cloud Storage bucket.
  • C. Create an advanced log filter matching userinfo, configure a log export in the Stackdriver console with Cloud Storage as a sink, and then configure a log exclusion with userinfo as a filter.
  • D. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, create an advanced log filter matching userinfo, and then configure a log export in the Stackdriver console with Cloud Storage as a sink.

Answer: B

Explanation:
https://medium.com/google-cloud/fluentd-filter-plugin-for-google-cloud-data-loss-prevention-api-42bbb1308e7

NEW QUESTION 5
You are performing a semiannual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month over the next six months. Your service is fully containerized and runs on Google Cloud Platform (GCP), using a Google Kubernetes Engine (GKE) Standard regional cluster on three zones with cluster autoscaler enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while avoiding unnecessary costs. How should you prepare to handle the predicted growth?

  • A. Verify the maximum node pool size, enable a horizontal pod autoscaler, and then perform a load test to verify your expected resource needs.
  • B. Because you are deployed on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically, regardless of growth rate.
  • C. Because you are at only 30% utilization, you have significant headroom and you won't need to add any additional capacity for this rate of growth.
  • D. Proactively add 60% more node capacity to account for six months of 10% growth rate, and then perform a load test to make sure you have enough capacity.

Answer: A

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption
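To see why load testing still matters, a rough back-of-the-envelope projection (using the illustrative numbers from the question) shows where utilization lands after six months of compounding growth:

```python
# Rough capacity projection: 30% CPU utilization today, growing 10%
# month-over-month for six months (compounding).

def projected_utilization(current: float, monthly_growth: float, months: int) -> float:
    return current * (1 + monthly_growth) ** months

u = projected_utilization(0.30, 0.10, 6)
print(f"{u:.1%}")  # about 53% of currently deployed capacity
```

That is still under 100%, but losing one zone of a three-zone regional cluster removes roughly a third of capacity, so verifying node pool limits, enabling an HPA, and load testing is how you confirm the real headroom.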

NEW QUESTION 6
Your application images are built using Cloud Build and pushed to Google Container Registry (GCR). You want to be able to specify a particular version of your application for deployment based on the release version tagged in source control. What should you do when you push the image?

  • A. Reference the image digest in the source control tag.
  • B. Supply the source control tag as a parameter within the image name.
  • C. Use Cloud Build to include the release version tag in the application image.
  • D. Use GCR digest versioning to match the image to the tag in source control.

Answer: B

Explanation:
https://cloud.google.com/container-registry/docs/pushing-and-pulling
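One way to wire this up (a sketch; the image name is an assumption) is a tag-triggered Cloud Build that uses the built-in `$TAG_NAME` substitution inside the image name:

```yaml
# Hypothetical cloudbuild.yaml for a trigger that fires on new git tags.
# $TAG_NAME resolves to the source-control tag that triggered the build.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME'
```

Deployments can then reference `gcr.io/PROJECT/my-app:v1.2.3` to pull exactly the build produced from that release tag.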

NEW QUESTION 7
You need to deploy a new service to production. The service needs to automatically scale using a Managed Instance Group (MIG) and should be deployed over multiple regions. The service needs a large number of resources for each instance and you need to plan for capacity. What should you do?

  • A. Use the n1-highcpu-96 machine type in the configuration of the MIG.
  • B. Monitor results of Stackdriver Trace to determine the required amount of resources.
  • C. Validate that the resource requirements are within the available quota limits of each region.
  • D. Deploy the service in one region and use a global load balancer to route traffic to this region.

Answer: C

Explanation:
https://cloud.google.com/compute/quotas#understanding_quotas https://cloud.google.com/compute/quotas

NEW QUESTION 8
You are running a real-time gaming application on Compute Engine that has a production and testing environment. Each environment has its own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently in your production frontend servers. You want to ensure that network traffic is captured for analysis. What should you do?

  • A. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 0.5.
  • B. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 1.0.
  • C. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 0.5. Apply changes in testing before production.
  • D. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 1.0. Apply changes in testing before production.

Answer: D

NEW QUESTION 9
You support a multi-region web service running on Google Kubernetes Engine (GKE) behind a Global HTTPS Cloud Load Balancer (CLB). For legacy reasons, user requests first go through a third-party Content Delivery Network (CDN), which then routes traffic to the CLB. You have already implemented an availability Service Level Indicator (SLI) at the CLB level. However, you want to increase coverage in case of a potential load balancer misconfiguration, CDN failure, or other global networking catastrophe. Where should you measure this new SLI?
Choose 2 answers

  • A. Your application servers' logs
  • B. Instrumentation coded directly in the client
  • C. Metrics exported from the application servers
  • D. GKE health checks for your application servers
  • E. A synthetic client that periodically sends simulated user requests

Answer: BE

NEW QUESTION 10
Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to a Kubernetes cluster in the production environment. The security auditor is concerned that developers or operators could circumvent automated testing and push code changes to production without approval. What should you do to enforce approvals?

  • A. Configure the build system with protected branches that require pull request approval.
  • B. Use an Admission Controller to verify that incoming requests originate from approved sources.
  • C. Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.
  • D. Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an attestor.

Answer: D

Explanation:
The key phrase here is "developers or operators". With option A, operators could still push images to production without approval: they can touch the cluster directly, and the cluster itself cannot enforce anything against them. Binary authorization enforces the policy inside the cluster, so only images attested by the build pipeline are admitted.

NEW QUESTION 11
You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify and react to production environment issues without false alerts from development and staging projects. You want to ensure that you adhere to the principle of least privilege when providing relevant team members with access to Stackdriver Workspaces. What should you do?

  • A. Grant relevant team members read access to all GCP production projects. Create Stackdriver Workspaces inside each project.
  • B. Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create Stackdriver Workspaces inside each project.
  • C. Choose an existing GCP production project to host the monitoring workspace. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.
  • D. Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.

Answer: D

Explanation:
A Stackdriver Workspace can monitor many projects, but each project can host only one Workspace. Google recommends creating a new, dedicated project to host the Workspace when monitoring multiple projects; this also lets you grant team members read access to the Workspace without granting them access to the production projects themselves.

NEW QUESTION 12
You are responsible for the reliability of a high-volume enterprise application. A large number of users report that an important subset of the application’s functionality – a data-intensive reporting feature – is consistently failing with an HTTP 500 error. When you investigate your application’s dashboards, you notice a strong correlation between the failures and a metric that represents the size of an internal queue used for generating reports. You trace the failures to a reporting backend that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend’s persistent disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report generation feature. How would you define it?

  • A. As the I/O wait times aggregated across all report generation backends
  • B. As the proportion of report generation requests that result in a successful response
  • C. As the application’s report generation queue size compared to a known-good threshold
  • D. As the reporting backend PD throughput capacity compared to a known-good threshold

Answer: B

Explanation:
According to SRE Workbook, one of potential SLI is as below:
* Type of service: Request-driven
* Type of SLI: Availability
* Description: The proportion of requests that resulted in a successful response. https://sre.google/workbook/implementing-slos/
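Following that definition, the SLI for the report generation feature reduces to simple counting. The request counts below are invented for illustration:

```python
# Availability SLI: good events divided by total events.

def availability_sli(successful_requests: int, total_requests: int) -> float:
    if total_requests == 0:
        return 1.0  # no traffic: conventionally treat the SLI as met
    return successful_requests / total_requests

print(availability_sli(997, 1000))  # 0.997
```

Note the SLI is defined over request outcomes (options like queue size or I/O wait in A, C, and D are causes, not user-visible availability).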

NEW QUESTION 13
Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

  • A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.
  • B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.
  • C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.
  • D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.

Answer: C

Explanation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics
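A sketch of what such an HPA could look like once the custom metrics adapter is installed (the deployment name, replica bounds, and target value are assumptions):

```yaml
# Hypothetical HPA driven by the load balancer's request-count metric,
# surfaced to Kubernetes via the Stackdriver custom metrics adapter.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: loadbalancing.googleapis.com|https|request_count
        target:
          type: AverageValue
          averageValue: "100"
```

Scaling on requests per pod is a better SLI-aligned signal here than CPU, since NGINX can saturate on connections before CPU rises.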

NEW QUESTION 14
Your application artifacts are being built and deployed via a CI/CD pipeline. You want the CI/CD pipeline to securely access application secrets. You also want to more easily rotate secrets in case of a security breach. What should you do?

  • A. Prompt developers for secrets at build time. Instruct developers to not store secrets at rest.
  • B. Store secrets in a separate configuration file on Git. Provide select developers with access to the configuration file.
  • C. Store secrets in Cloud Storage encrypted with a key from Cloud KMS. Provide the CI/CD pipeline with access to Cloud KMS via IAM.
  • D. Encrypt the secrets and store them in the source code repository. Store a decryption key in a separate repository and grant your pipeline access to it.

Answer: C

NEW QUESTION 15
You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms. What is the Google-recommended way of calculating this SLI?

  • A. Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
  • B. Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
  • C. Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
  • D. Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.

Answer: C

Explanation:
https://sre.google/workbook/implementing-slos/
The SRE principles book recommends treating the SLI as the ratio of two numbers: the number of good events divided by the total number of events. For example: number of successful HTTP requests / total HTTP requests (success rate).
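Concretely, the good-events/total-events computation for this latency SLI can be sketched as follows (the sample latencies are invented):

```python
# Latency SLI: proportion of home page requests served under the 100 ms
# acceptable page load time.

THRESHOLD_MS = 100

def latency_sli(latencies_ms, threshold=THRESHOLD_MS):
    good = sum(1 for latency in latencies_ms if latency < threshold)
    return good / len(latencies_ms)

samples = [45, 80, 99, 120, 300, 60, 95, 101]
print(latency_sli(samples))  # 5 of 8 requests under 100 ms -> 0.625
```

This ratio form is why option C is preferred over the percentile options: it yields a single number directly comparable against an SLO target.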

NEW QUESTION 16
You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web application. Customers expect the application to always be available and have fast response times. Customers are currently happy with the application performance and availability. Based on current measurements, you observe that the 90th percentile of latency is 120ms and the 95th percentile of latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to publish?

  • A. 90th percentile – 100ms 95th percentile – 250ms
  • B. 90th percentile – 120ms 95th percentile – 275ms
  • C. 90th percentile – 150ms 95th percentile – 300ms
  • D. 90th percentile – 250ms 95th percentile – 400ms

Answer: C

Explanation:
https://sre.google/sre-book/service-level-objectives/
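The reasoning behind option C can be made explicit: publish targets slightly looser than what you currently measure, so normal variation does not immediately burn the error budget, while tighter targets (options A and B) would be missed or leave zero margin. A small sanity check with the question's numbers:

```python
# Measured latency percentiles from the question, in milliseconds.
measured = {"p90": 120, "p95": 275}

# Candidate SLO targets (option C): strictly above current measurements.
candidate = {"p90": 150, "p95": 300}

has_headroom = all(candidate[p] > measured[p] for p in measured)
print(has_headroom)  # True: current performance sits inside the proposed SLO
```

Option D would also pass this check but promises far less than users already experience, which sets expectations too low.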

NEW QUESTION 17
You need to reduce the cost of virtual machines (VMs) for your organization. After reviewing different options, you decide to leverage preemptible VM instances. Which application is suitable for preemptible VMs?

  • A. A scalable in-memory caching system
  • B. The organization's public-facing website
  • C. A distributed, eventually consistent NoSQL database cluster with sufficient quorum
  • D. A GPU-accelerated video rendering platform that retrieves and stores videos in a storage bucket

Answer: D

Explanation:
https://cloud.google.com/compute/docs/instances/preemptible

NEW QUESTION 18
You support a Node.js application running on Google Kubernetes Engine (GKE) in production. The application makes several HTTP requests to dependent applications. You want to anticipate which dependent applications might cause performance issues. What should you do?

  • A. Instrument all applications with Stackdriver Profiler.
  • B. Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.
  • C. Use Stackdriver Debugger to review the execution of logic within each application.
  • D. Modify the Node.js application to log HTTP request and response times to dependent applications. Use Stackdriver Logging to find dependent applications that are performing poorly.

Answer: B

NEW QUESTION 19
......

Recommend!! Get the Full Professional-Cloud-DevOps-Engineer dumps in VCE and PDF From Allfreedumps.com, Welcome to Download: https://www.allfreedumps.com/Professional-Cloud-DevOps-Engineer-dumps.html (New 81 Q&As Version)