1Z0-062 Exam Questions - Online Test



Q1. Which three statements are true about Oracle Data Pump export and import operations? 

A. You can detach from a data pump export job and reattach later. 

B. Data pump uses parallel execution server processes to implement parallel import. 

C. Data pump import requires the import file to be in a directory owned by the oracle owner. 

D. The master table is the last object to be exported by the data pump. 

E. You can detach from a data pump import job and reattach later. 

Answer: A,B,D 

Explanation: B: Data Pump can employ multiple worker processes, running in parallel, to increase job performance. 

D: For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job, and at the end of an export job the content of the master table is written to a file in the dump file set. For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database. 
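As a practical illustration of statements A and E, a sketch of detaching from and reattaching to an export job is shown below. The directory object DP_DIR, the job name EXP_JOB, and the HR schema are placeholders and are not part of the question: 

$ expdp system schemas=HR directory=DP_DIR dumpfile=hr_%U.dmp parallel=4 job_name=EXP_JOB 

Pressing Ctrl+C during the logging output returns you to the interactive Export> prompt, where EXIT_CLIENT detaches the client while the job keeps running on the server. Later you can reattach and resume the log output: 

$ expdp system attach=EXP_JOB 

Export> STATUS 

Export> CONTINUE_CLIENT 

The PARALLEL=4 setting also illustrates statement B: the job can use multiple worker and parallel execution processes to speed up the unload. 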

Q2. Which three statements are true regarding the use of the Database Migration Assistant for Unicode (DMU)? 

A. A DBA can check specific tables with the DMU 

B. The database to be migrated must be opened read-only. 

C. The release of the database to be converted can be any release since 9.2.0.8. 

D. The DMU can report columns that are too long in the converted characterset. 

E. The DMU can report columns that are not represented in the converted characterset. 

Answer: A,D,E 

Explanation: A: In certain situations, you may want to exclude selected columns or tables from scanning or conversion steps of the migration process. 

D: Exceed column limit 

The cell data will not fit into a column after conversion. 

E: Need conversion 

The cell data needs to be converted, because its binary representation in the target character set is different than the representation in the current character set, but neither length limit issues nor invalid representation issues have been found. 

* Oracle Database Migration Assistant for Unicode (DMU) is a unique next-generation migration tool providing an end-to-end solution for migrating your databases from legacy encodings to Unicode. 

Incorrect: 

Not C: The release of Oracle Database must be 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, or later. 
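As a quick sanity check before running the DMU against a database, you would typically confirm the current database character set. The query below is generic and not specific to the DMU; the value returned depends entirely on your database: 

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'; 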

Q3. You are connected using SQL* Plus to a multitenant container database (CDB) with SYSDBA privileges and execute the following sequence statements: 

What is the result of the last SET CONTAINER statement and why is it so? 

A. It succeeds because the PDB_ADMIN user has the required privileges. 

B. It fails because common users are unable to use the SET CONTAINER statement. 

C. It fails because local users are unable to use the SET CONTAINER statement. 

D. It fails because the SET CONTAINER statement cannot be used with PDB$SEED as the target pluggable database (PDB). 

Answer:
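Although the statement sequence referenced by the question is not reproduced here, the syntax under discussion is the SET CONTAINER clause of ALTER SESSION. A minimal, generic illustration follows; the PDB name pdb1 is only a placeholder and is not taken from the missing listing: 

SQL> ALTER SESSION SET CONTAINER = pdb1; 

SQL> SHOW CON_NAME 

Switching containers this way requires the SET CONTAINER system privilege (or an administrative connection such as SYSDBA) in the target container. 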

Q4. Which three features work together to allow a SQL statement to have different cursors for the same statement based on different selectivity ranges? 

A. Bind Variable Peeking 

B. SQL Plan Baselines 

C. Adaptive Cursor Sharing 

D. Bind variable used in a SQL statement 

E. Literals in a SQL statement 

Answer: A,C,E 

Explanation: * In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement. 

When a query uses literals, the optimizer can use the literal values to find the best plan. However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text. This task can be extremely difficult. By peeking at bind values the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan. 

C: Oracle 11g/12c uses Adaptive Cursor Sharing to solve this problem by allowing the server to compare the effectiveness of execution plans between executions with different bind variable values. If it notices suboptimal plans, it allows certain bind variable values, or ranges of values, to use alternate execution plans for the same statement. This functionality requires no additional configuration. 
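A minimal way to observe adaptive cursor sharing in action, assuming a table with a skewed, histogram-bearing column (the table SALES and column CUST_ID are hypothetical): 

SQL> VARIABLE cid NUMBER 

SQL> EXEC :cid := 42 

SQL> SELECT COUNT(*) FROM sales WHERE cust_id = :cid; 

SQL> SELECT child_number, is_bind_sensitive, is_bind_aware FROM v$sql WHERE sql_text LIKE 'SELECT COUNT(*) FROM sales%'; 

After executions with bind values of very different selectivity, V$SQL typically shows the cursor marked bind-sensitive and, once a suboptimal plan is detected, additional bind-aware child cursors. 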

Q5. Which two statements are true about variable extent size support for large ASM files? 

A. The metadata used to track extents in SGA is reduced. 

B. Rebalance operations are completed faster than with a fixed extent size 

C. An ASM Instance automatically allocates an appropriate extent size. 

D. Resync operations are completed faster when a disk comes online after being taken offline. 

E. Performance improves in a stretch cluster configuration by reading from a local copy of an extent. 

Answer: A,C 

Explanation: A: Variable size extents enable support for larger ASM datafiles, reduce SGA memory requirements for very large databases (A), and improve performance for file create and open operations. 

C: You don't have to worry about the sizes; the ASM instance automatically allocates the appropriate extent size. 

Note: 

* The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents. 

* The size of the extent map that defines a file can be smaller by a factor of 8 and 64 depending on the file size. The initial extent size is equal to the allocation unit size and it increases by a factor of 8 and 64 at predefined thresholds. This feature is automatic for newly created and resized datafiles when the disk group compatibility attributes are set to Oracle Release 11 or higher. 
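Variable size extents depend on the disk group compatibility attributes being set to 11.1 or higher. A hedged sketch of checking and raising them for a disk group named DATA (the disk group name and the target release are placeholders): 

SQL> SELECT name, value FROM v$asm_attribute WHERE group_number = 1; 

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.2'; 

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.2'; 

Note that compatibility attributes can only be advanced, not rolled back. 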

Q6. You Execute the Following command to create a password file in the database server: 

$ orapwd file = ‘+DATA/PROD/orapwprod entries = 5 ignorecase = N format = 12’ Which two statements are true about the password file? 

A. It records the usernames and passwords of users when granted the DBA role. 

B. It contains the usernames and passwords of users for whom auditing is enabled. 

C. It is used by Oracle to authenticate users for remote database administration. 

D. It records the usernames and passwords of all users when they are added to the OSDBA or OSOPER operating system groups. 

E. It supports the SYSBACKUP, SYSDG, and SYSKM system privileges. 

Answer: C,E 
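To see which administrative privileges are recorded in the password file, you can query V$PWFILE_USERS; with a 12c format password file it reports the SYSBACKUP, SYSDG, and SYSKM privileges alongside SYSDBA and SYSOPER (illustrative query only): 

SQL> SELECT username, sysdba, sysoper, sysbackup, sysdg, syskm FROM v$pwfile_users; 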

Q7. You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema. 

Examine the following steps: 

1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

2. Execute the DBMS_STATS.SEED_COL_USAGE (null, ‘SH’, 500) procedure. 

3. Execute the required queries on the CUSTOMERS table. 

4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

Identify the correct sequence of steps. 

A. 3, 2, 1, 4 

B. 2, 3, 4, 1 

C. 4, 1, 3, 2 

D. 3, 2, 4, 1 

Answer: B 

Explanation: Step 1 (statement 2): Seed column usage. Oracle must observe a representative workload in order to determine the appropriate column groups. Using the procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload. 

Step 2 (statement 3): Run the workload. You do not need to execute every query in your workload during this window; you can simply run EXPLAIN PLAN for some of your longer-running queries to ensure column group information is recorded for them. 

Step 3 (statement 4): Report the column usage captured during the monitoring window with DBMS_STATS.REPORT_COL_USAGE. 

Step 4 (statement 1): Create the column groups. At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window. You simply call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table; this function requires just two arguments, the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered on the table. 

Note: 

* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object. 

* The Oracle SQL optimizer has always been ignorant of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns. 

* Creating extended statistics. Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats: 

1 - The first step is to create column histograms for the related columns. 

2 - Next, we run dbms_stats.create_extended_stats to relate the columns together. 

Unlike a traditional procedure that is invoked via an execute (“exec”) statement, Oracle extended statistics are created via a select statement. 
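Putting the pieces together in the order given by the answer (2, 3, 4, 1), a sketch of the full workflow for SH.CUSTOMERS follows. The 500-second monitoring window comes from the question; the sample workload query and the final GATHER_TABLE_STATS call are illustrative additions: 

SQL> EXEC DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) 

SQL> SELECT COUNT(*) FROM sh.customers WHERE cust_city = 'Los Angeles' AND cust_state_province = 'CA'; 

SQL> SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual; 

SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual; 

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS') 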

Q8. Identify two situations in which the alert log file is updated. 

A. Running a query on a table returns ORA-600: Internal Error. 

B. Inserting a value into a table returns ORA-01722: invalid number. 

C. Creating a table returns ORA-00955: name is already used by an existing object. 

D. Inserting a value into a table returns ORA-00001: unique constraint (SYS.OK_TECHP) violated. 

E. Rebuilding an index using ALTER INDEX . . . REBUILD fails with an ORA-01578: ORACLE data block corrupted (file # 14, block # 50) error. 

Answer: A,E 

Explanation: The alert log is a chronological log of messages and errors, and includes the following items: 

* All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60) that occur 

* Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements 

* Messages and errors relating to the functions of shared server and dispatcher processes 

* Errors occurring during the automatic refresh of a materialized view 

* The values of all initialization parameters that had nondefault values at the time the database and instance start 

Note: 

* The alert log file (also referred to as the ALERT.LOG) is a chronological log of messages and errors written out by an Oracle Database. Typical messages found in this file include database startup and shutdown, log switches, space errors, and so on. This file should be monitored constantly to detect unexpected messages and corruptions. 
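To find where the alert log lives on disk in 11g and later, the ADR views can be queried; a minimal check (the text alert log sits in the 'Diag Trace' directory and the XML version in 'Diag Alert'): 

SQL> SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert'); 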

Q9. You are administering a database and you receive a requirement to apply the following restrictions: 

1. A connection must be terminated after four unsuccessful login attempts by a user. 

2. A user should not be able to create more than four simultaneous sessions. 

3. User session must be terminated after 15 minutes of inactivity. 

4. Users must be prompted to change their passwords every 15 days. 

How would you accomplish these requirements? 

A. by granting a secure application role to the users 

B. by creating and assigning a profile to the users and setting the REMOTE_OS_AUTHENT parameter to FALSE 

C. by creating and assigning a profile to the users and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS parameter to 4 

D. by implementing Fine-Grained Auditing (FGA) and setting the REMOTE_LOGIN_PASSWORDFILE parameter to NONE 

E. by implementing a Database Resource Manager plan and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS parameter to 4 

Answer: A 

Explanation: You can design your applications to automatically grant a role to the user who is trying to log in, provided the user meets criteria that you specify. To do so, you create a secure application role, which is a role that is associated with a PL/SQL procedure (or PL/SQL package that contains multiple procedures). The procedure validates the user: if the user fails the validation, then the user cannot log in. If the user passes the validation, then the procedure grants the user a role so that he or she can use the application. The user has this role only as long as he or she is logged in to the application. When the user logs out, the role is revoked. 

Incorrect: 

Not B: REMOTE_OS_AUTHENT specifies whether remote clients will be authenticated with the value of the OS_AUTHENT_PREFIX parameter. 

Not C, not E: SEC_MAX_FAILED_LOGIN_ATTEMPTS specifies the number of authentication attempts that can be made by a client on a connection to the server process. 

After the specified number of failure attempts, the connection will be automatically dropped by the server process. 

Not D: REMOTE_LOGIN_PASSWORDFILE specifies whether Oracle checks for a password file. 

Values: 

* shared - One or more databases can use the password file. The password file can contain SYS as well as non-SYS users. 

* exclusive - The password file can be used by only one database. The password file can contain SYS as well as non-SYS users. 

* none - Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system. 

Note: 

The REMOTE_OS_AUTHENT parameter is deprecated. It is retained for backward compatibility only. 
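For completeness, the secure application role mechanism described above is created with the IDENTIFIED USING clause; a minimal sketch in which the role, package, and procedure names are hypothetical: 

SQL> CREATE ROLE hr_app_role IDENTIFIED USING sec_mgr.validate_hr_user; 

Here SEC_MGR.VALIDATE_HR_USER would be an invoker's rights PL/SQL procedure that checks the connection criteria and, when they are met, enables the role for the session with DBMS_SESSION.SET_ROLE. 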

Q10. Examine the current value for the following parameters in your database instance: 

SGA_MAX_SIZE = 1024M 

SGA_TARGET = 700M 

DB_8K_CACHE_SIZE = 124M 

LOG_BUFFER = 200M 

You issue the following command to increase the value of DB_8K_CACHE_SIZE: 

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M; 

Which statement is true? 

A. It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically. 

B. It succeeds only if memory is available from the autotuned components of the SGA. 

C. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_TARGET. 

D. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_MAX_SIZE. 

Answer: B 

Explanation: * The SGA_TARGET parameter can be dynamically increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. 

* Example: 

For example, suppose you have an environment with the following configuration: 

SGA_MAX_SIZE = 1024M 

SGA_TARGET = 512M 

DB_8K_CACHE_SIZE = 128M 

In this example, the value of SGA_TARGET can be resized up to 1024M and can also be reduced until one or more of the automatically sized components reaches its minimum size. The exact value depends on environmental factors such as the number of CPUs on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M. 

* DB_8K_CACHE_SIZE: size of the cache for 8K buffers 

* For example, consider this configuration: 

SGA_TARGET = 512M 

DB_8K_CACHE_SIZE = 128M 

In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M is taken away from the automatically sized components. Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components.
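To check whether the automatically tuned components currently hold enough memory to give up for the DB_8K_CACHE_SIZE increase, you can look at V$SGA_DYNAMIC_COMPONENTS; a minimal query: 

SQL> SELECT component, current_size/1024/1024 AS size_mb FROM v$sga_dynamic_components WHERE current_size > 0; 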