
Professional-Cloud-Architect Exam Questions - Online Test



Proper study for the Google Certified Professional - Cloud Architect (GCP) certification begins with Google Professional-Cloud-Architect preparation products designed to deliver guaranteed Professional-Cloud-Architect questions and help you pass the Professional-Cloud-Architect test on your first attempt. Try the free Professional-Cloud-Architect demo right now.

Free online Google Professional-Cloud-Architect dumps demo below:

NEW QUESTION 1

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

  • A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
  • B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine.
  • C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux.
  • D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk.
  • E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

Answer: A

Explanation:
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.
sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]
where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
References: https://cloud.google.com/compute/docs/disks/add-persistent-disk
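The flow in option A can be sketched end to end; the disk name, zone, size, and device path below are assumptions for illustration:

```shell
# Grow the persistent disk; this is safe while the VM is running
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a

# On the VM, grow the ext4 file system into the new space.
# If the disk has no partition table, pass the whole device:
sudo resize2fs /dev/sdb
# With a partition table, specify the partition instead:
# sudo resize2fs /dev/sdb1

df -h   # confirm the added capacity
```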

NEW QUESTION 2

Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.
What should the customer do to improve their model’s results over time?

  • A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
  • B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
  • C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.
  • D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.

Answer: D

Explanation:
https://cloud.google.com/solutions/building-a-serverless-ml-model
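Option D could be sketched with the bq CLI; the dataset, table, schema, and bucket names below are assumptions:

```shell
# Table that logs every recommendation and its observed outcome
bq mk --table recs.recommendation_history \
  user_id:STRING,item_id:STRING,recommended_at:TIMESTAMP,clicked:BOOLEAN

# Stream outcome records (newline-delimited JSON) as they happen
bq insert recs.recommendation_history events.json

# Later, export the labeled history as training data for the model
bq extract recs.recommendation_history gs://ml-training-data/recs-*.csv
```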

NEW QUESTION 3

One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
[Exhibit: Dockerfile not reproduced in this extract]
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? Choose 2 answers.

  • A. Remove Python after running pip.
  • B. Remove dependencies from requirements.txt.
  • C. Use a slimmed-down base image like Alpine Linux.
  • D. Use larger machine types for your Google Container Engine node pools.
  • E. Copy the source after the package dependencies (Python and pip) are installed.

Answer: CE

Explanation:
Deployment speed can be improved by limiting the size of the uploaded app, limiting the complexity of the build in the Dockerfile, and ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
References: https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU https://www.alpinelinux.org/about/
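Since the original Dockerfile exhibit is not reproduced here, the following is a hypothetical reconstruction showing both fixes: a slim Alpine-based image (answer C) and copying the source only after dependencies are installed (answer E), so code-only changes don't invalidate the cached dependency layer:

```dockerfile
FROM python:3-alpine              # small base image (answer C)
WORKDIR /app

# Dependencies change rarely; install them first so Docker caches
# this layer across source-only changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source changes often; copying it last keeps earlier layers cached (answer E)
COPY . .
CMD ["python", "main.py"]
```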

NEW QUESTION 4

For this question, refer to the TerramEarth case study.
TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

  • A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.
  • B. Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.
  • C. Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.
  • D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Answer: D

Explanation:
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold Data Storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery - In a disaster recovery scenario, recovery time is key. Cloud Storage provides low latency access to data stored as Coldline Storage.
References: https://cloud.google.com/storage/docs/storage-classes
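Option D might look like this from the vehicle-side gateway; the bucket and file names are assumptions:

```shell
# One-time: Coldline bucket for rarely-read, low-cost archive storage
gsutil mb -c coldline -l us gs://terramearth-telemetry

# Hourly: compress the snapshot, then upload it
gzip -9 telemetry-20480102T14.csv
gsutil cp telemetry-20480102T14.csv.gz gs://terramearth-telemetry/
```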

NEW QUESTION 5

You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operational reliability and end-to-end in-transit encryption based on Google best practices.
What should you do?

  • A. Create a cross-region load balancer with URL Maps.
  • B. Create an HTTPS load balancer with URL maps.
  • C. Create appropriate instance groups and instances. Configure SSL proxy load balancing.
  • D. Create a global forwarding rule. Configure SSL proxy load balancing.

Answer: B

Explanation:
Reference https://cloud.google.com/load-balancing/docs/https/url-map
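A sketch of option B with gcloud; the service, certificate, and path names are assumptions:

```shell
# URL map routing /api/* to a separate backend service
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute url-maps add-path-matcher web-map \
  --path-matcher-name=api-paths --new-hosts='*' \
  --default-service=web-backend --path-rules="/api/*=api-backend"

# HTTPS proxy terminates TLS at the edge; using HTTPS to the backends
# as well gives end-to-end in-transit encryption
gcloud compute target-https-proxies create web-proxy \
  --url-map=web-map --ssl-certificates=www-cert
gcloud compute forwarding-rules create web-https \
  --global --target-https-proxy=web-proxy --ports=443
```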

NEW QUESTION 6

The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?

  • A. Append metadata to file body. Compress individual files. Name files with serverName-Timestamp. Create a new bucket if the bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to the existing bucket.
  • B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and manifest file into a single archive file. Name files using serverName-EventSequence. Create a new bucket if the bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.
  • C. Compress individual files. Name files with serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
  • D. Append metadata to file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.

Answer: D

Explanation:
In order to maintain a high request rate, avoid sequential names. Completely random object names give the best load distribution; randomness after a common prefix is also effective under that prefix. https://cloud.google.com/storage/docs/request-rate
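A minimal local sketch of the recommended random-prefix naming scheme; the server name, file name, and bucket are assumptions:

```shell
# A random hex prefix spreads writes across Cloud Storage's key space,
# avoiding the hotspots that sequential names (timestamps, sequence
# numbers) create at high request rates.
prefix="$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
object="${prefix}-server42-event.json.gz"
echo "${object}"
# The upload would then be (bucket name is an assumption):
# gsutil cp event.json.gz "gs://example-events/${object}"
```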

NEW QUESTION 7

You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace. What should you do?
[Exhibit: stack trace not reproduced in this extract]

  • A. Recompile the CloakedServlet class using an MD5 hash instead of SHA1.
  • B. Digitally sign all of your JAR files and redeploy your application.
  • C. Upload missing JAR files and redeploy your application

Answer: B

NEW QUESTION 8

Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision Google Cloud Resource Manager and set yourself up as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

  • A. Org viewer, project owner
  • B. Org viewer, project viewer
  • C. Org admin, project browser
  • D. Project owner, network admin

Answer: B

Explanation:
https://cloud.google.com/iam/docs/using-iam-securely
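Granting answer B's roles could look like the following; the group address, organization ID, and project ID are assumptions:

```shell
# Org viewer: lets the security team see all projects in the organization
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:security-team@example.com" \
  --role="roles/resourcemanager.organizationViewer"

# Project viewer: read-only visibility into a project's resources
gcloud projects add-iam-policy-binding my-project \
  --member="group:security-team@example.com" \
  --role="roles/viewer"
```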

NEW QUESTION 9

Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

  • A. Add each tier to a different subnetwork.
  • B. Set up software based firewalls on individual VMs.
  • C. Add tags to each tier and set up routes to allow the desired traffic flow.
  • D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Answer: D

Explanation:
Google Cloud Platform (GCP) enforces this kind of tiered traffic control through firewall rules and network tags. Rules and tags can be defined once and apply across all regions.
References: https://cloud.google.com/docs/compare/openstack/ https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/
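Option D could be sketched as follows; the network name, tags, and ports are assumptions:

```shell
# Allow web -> API only
gcloud compute firewall-rules create web-to-api --network=app-net \
  --allow=tcp:8080 --source-tags=web --target-tags=api

# Allow API -> database only; since no rule permits web -> database,
# that traffic is blocked by the implied deny-ingress rule
gcloud compute firewall-rules create api-to-db --network=app-net \
  --allow=tcp:3306 --source-tags=api --target-tags=db
```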

NEW QUESTION 10

You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.
What should you do?

  • A. Customize the cache keys to omit the protocol from the key.
  • B. Shorten the expiration time of the cached objects.
  • C. Make sure the HTTP(S) header “Cache-Region” points to the closest region of your users.
  • D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.

Answer: A

Explanation:
Reference https://cloud.google.com/cdn/docs/bestpractices#using_custom_cache_keys_to_improve_cache_hit_ratio
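Option A can be applied with a backend-service flag; the service name is an assumption, and flag availability may vary by gcloud version:

```shell
# Treat http:// and https:// requests for the same URL as one cache
# entry, so the same object isn't cached twice
gcloud compute backend-services update web-backend --global \
  --no-cache-key-include-protocol
```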

NEW QUESTION 11

You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?

  • A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
  • B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
  • C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
  • D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.

Answer: D

Explanation:
https://cloud.google.com/compute/docs/disks

NEW QUESTION 12

Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations. Which database type should you use?

  • A. Flat file
  • B. NoSQL
  • C. Relational
  • D. Blobstore

Answer: B

Explanation:
Relational databases were not designed to cope with the scale and agility challenges that face modern applications, nor were they built to take advantage of the commodity storage and processing power available today.
NoSQL fits well when applications create massive volumes of new, rapidly changing data types: structured, semi-structured, unstructured, and polymorphic data.

NEW QUESTION 13

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?

  • A. Hash all data using SHA256
  • B. Encrypt all data using elliptic curve cryptography
  • C. De-identify the data with the Cloud Data Loss Prevention API
  • D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers

Answer: C

Explanation:
Reference: https://cloud.google.com/solutions/pci-dss-compliance-ingcp#
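The DLP-based de-identification approach could look like this REST sketch; the project ID and the chosen transformation are assumptions:

```shell
# De-identify free text with the DLP API before writing it to Bigtable.
# replaceWithInfoTypeConfig masks each finding with its infoType name.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": {"value": "Call me at 555-0123, card 4111-1111-1111-1111"},
    "inspectConfig": {"infoTypes": [
      {"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
      {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}]}}
  }'
```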

NEW QUESTION 14

For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

  • A. Create a scalable environment in GCP for simulating production load.
  • B. Use the existing infrastructure to test the GCP-based backend at scale.
  • C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
  • D. Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Answer: A

Explanation:
From the scenario, the requirements for the Game Backend Platform are:
  • Dynamically scale up or down based on game activity
  • Connect to a managed NoSQL database service
  • Run customized Linux distro

NEW QUESTION 15

Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-Mbps internet connection.
What actions will meet your company’s needs?

  • A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
  • B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
  • C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish one Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
  • D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily.

Answer: B

Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering
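The recommended daily upload over the Dedicated Interconnect or Direct Peering link could be sketched as follows; the paths, bucket, and tuning threshold are assumptions:

```shell
# Compress, then upload in parallel: -m parallelizes across files,
# and composite uploads split large files into concurrent chunks
gzip -9 /var/telemetry/daily/*.csv
gsutil -m -o "GSUtil:parallel_composite_upload_threshold=150M" \
  cp /var/telemetry/daily/*.csv.gz gs://ingest-daily/
```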

NEW QUESTION 16

You need to develop procedures to verify the resilience of your disaster recovery plan for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?

  • A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
  • B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
  • C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
  • D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.

Answer: B

Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering

NEW QUESTION 17

Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month.

  • A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the number of queries per user.
  • B. In the BigQuery interface, execute a query on the JOBS table to get the required information.
  • C. Use ‘bq show’ to list all jobs. Per job, use ‘bq ls’ to list job information and get the required information.
  • D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.

Answer: C

Explanation:
https://cloud.google.com/bigquery/docs/managing-jobs
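Two ways to pull per-user query counts from job metadata; the region and result limit are assumptions, and the INFORMATION_SCHEMA jobs view requires a reasonably recent BigQuery release:

```shell
# List recent jobs for all users with the CLI
bq ls -j -a --max_results=1000

# Aggregate queries per user over the last 30 days
bq query --nouse_legacy_sql '
SELECT user_email, COUNT(*) AS query_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = "QUERY"
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY user_email'
```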

NEW QUESTION 18

Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires private address space communication. Which networking approach should you use?

  • A. Google Cloud Dedicated Interconnect
  • B. Google Cloud VPN connected to the data center network
  • C. A NAT and TLS translation gateway installed on-premises
  • D. A Google Compute Engine instance with a VPN server installed connected to the data center network

Answer: A

Explanation:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.
Benefits:
  • Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
  • Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
  • You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
  • The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google's network.
References: https://cloud.google.com/interconnect/docs/details/dedicated

NEW QUESTION 19

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?

  • A. Configure their replication to use UDP.
  • B. Configure a Google Cloud Dedicated Interconnect.
  • C. Restore their database daily using Google Cloud SQL.
  • D. Add additional VPN connections and load balance them.
  • E. Send the replicated transaction to Google Cloud Pub/Sub.

Answer: B

NEW QUESTION 20
......

100% Valid and Newest Version Professional-Cloud-Architect Questions & Answers shared by Downloadfreepdf.net, Get Full Dumps HERE: https://www.downloadfreepdf.net/Professional-Cloud-Architect-pdf-download.html (New 170 Q&As)