
1Z0-062 Exam Questions - Online Test



Q1. Examine the contents of the SQL*Loader control file: 

Which three statements are true regarding the SQL*Loader operation performed using the control file? 

A. An EMP table is created if a table does not exist. Otherwise, the EMP table is appended with the loaded data. 

B. The SQL*Loader data file myfile1.dat has the column names for the EMP table. 

C. The SQL*Loader operation fails because no record terminators are specified. 

D. Field names should be the first line in both the SQL*Loader data files. 

E. The SQL*Loader operation assumes that the file must be a stream record format file with the normal carriage return string as the record terminator. 

Answer: A,B,E 

Explanation: A: The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with. 

B (not D): Note: 

* SQL*Loader-00210: first data file is empty, cannot process the FIELD NAMES record 

Cause: The data file listed in the next message was empty. Therefore, the FIELD NAMES FIRST FILE directive could not be processed. 

Action: Check the listed data file and fix it. Then retry the operation. 

E: 

* A comma-separated values (CSV) file (also sometimes called a character-separated values file, because the separator character does not have to be a comma) stores tabular data (numbers and text) in plain-text form. Plain text means that the file is a sequence of characters, with no data that has to be interpreted as binary numbers. A CSV file consists of any number of records, separated by line breaks of some kind; each record consists of fields, separated by some other character or string, most commonly a literal comma or tab. Usually, all records have an identical sequence of fields. 

* Fields with embedded commas must be quoted. 

Example: 

1997,Ford,E350,"Super, luxurious truck" 

Note: 

* SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. 
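For context, a minimal SQL*Loader control file using APPEND and comma-terminated fields might look like the sketch below (the control file from the question is not reproduced here; the table columns and file names are illustrative assumptions, not the exam's actual file): 

LOAD DATA 
INFILE 'myfile1.dat' 
APPEND 
INTO TABLE emp 
FIELDS TERMINATED BY ',' 
(empno, ename, deptno) 

Such a control file would typically be invoked from the command line with something like $ sqlldr scott/tiger CONTROL=emp.ctl. 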

Q2. You executed the following command to create a password file on the database server: 

$ orapwd file = orapworcl entries = 5 ignorecase=N 

Which statement describes the purpose of the above password file? 

A. It records usernames and passwords of users when granted the DBA role 

B. It contains usernames and passwords of users for whom auditing is enabled 

C. It is used by Oracle to authenticate users for remote database administration. 

D. It records usernames and passwords of all users when they are added to the OSDBA or OSOPER operating system groups. 

Answer:
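For illustration (the user name below is hypothetical), granting an administrative privilege such as SYSDBA adds an entry to this password file, and the current entries can be viewed through V$PWFILE_USERS: 

SQL> GRANT SYSDBA TO scott; 

SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users; 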

Q3. You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema. 

Examine the following steps: 

1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

2. Execute the DBMS_STATS.SEED_COL_USAGE (null, ‘SH’, 500) procedure. 

3. Execute the required queries on the CUSTOMERS table. 

4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

Identify the correct sequence of steps. 

A. 3, 2, 1, 4 

B. 2, 3, 4, 1 

C. 4, 1, 3, 2 

D. 3, 2, 4, 1 

Answer:

Explanation: Step 1 (2): Seed column usage. Oracle must observe a representative workload in order to determine the appropriate column groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload. 

Step 2 (3): Run the workload. You don't need to execute all of the queries in your workload during this window. You can simply run EXPLAIN PLAN for some of your longer-running queries to ensure column group information is recorded for these queries. 

Step 3 (1): Create the column groups. At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window. You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table. This function requires just two arguments: the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered on the table. 

Note: 

* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object. 

* The Oracle SQL optimizer has always been ignorant of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns. 

* Creating extended statistics. Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats: 

1 - The first step is to create column histograms for the related columns. 

2 - Next, we run dbms_stats.create_extended_stats to relate the columns together. 

Unlike a traditional procedure that is invoked via an execute (“exec”) statement, Oracle extended statistics are created via a SELECT statement. 
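For reference, this is how each of the four statements from the question would be issued in SQL*Plus, shown in the question's own numbering rather than in answer order (the sample query under step 3 is an illustrative assumption; any representative query against the table works): 

1. SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual; 

2. SQL> EXEC DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) 

3. SQL> SELECT * FROM sh.customers WHERE cust_state_province = 'CA'; 

4. SQL> SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual; 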

Q4. Examine the following command: 

ALTER SYSTEM SET enable_ddl_logging = TRUE; 

Which statement is true? 

A. Only the data definition language (DDL) commands that resulted in errors are logged in the alert log file. 

B. All DDL commands are logged in the alert log file. 

C. All DDL commands are logged in a different log file that contains DDL statements and their execution dates. 

D. Only DDL commands that resulted in the creation of new segments are logged. 

E. All DDL commands are logged in XML format in the alert directory under the Automatic Diagnostic Repository (ADR) home. 

Answer:

Explanation: Once DDL logging is turned on, every DDL command will be logged in the alert log file and also the log.xml file. 

Note: 

* By default, Oracle Database does not log any DDL operations performed by any user. The default settings for auditing only log DML operations. 

* Oracle 12c DDL Logging – ENABLE_DDL_LOGGING 

The first method is to enable the DDL logging feature built into the database. By default it is turned off, and you can turn it on by setting the value of the ENABLE_DDL_LOGGING initialization parameter to TRUE. 

* We can turn it on using the following command. The parameter is dynamic and you can turn it on/off on the go. 

SQL> alter system set ENABLE_DDL_LOGGING=true; 

System altered. 

Elapsed: 00:00:00.05 

Once it is turned on, every DDL command will be logged in the alert log file and also the log.xml file. 
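For illustration (the table name below is hypothetical), any DDL issued after the parameter is set, such as: 

SQL> CREATE TABLE ddl_demo (id NUMBER); 

is then recorded as described above, and the current parameter value can be verified in SQL*Plus with SHOW PARAMETER enable_ddl_logging. 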

Q5. Which three statements are true concerning unplugging a pluggable database (PDB)? 

A. The PDB must be open in read only mode. 

B. The PDB must be closed. 

C. The unplugged PDB becomes a non-CDB. 

D. The unplugged PDB can be plugged into the same multitenant container database (CDB). 

E. The unplugged PDB can be plugged into another CDB. 

F. The PDB data files are automatically removed from disk. 

Answer: B,D,E 

Explanation: B, not A: The PDB must be closed before unplugging it. 

D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way. There is no supported way to handle the conversion of such columns automatically. This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference. 

E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so that the PDB’s datafiles can remain in place. 

Reference: Oracle White Paper, Oracle Multitenant 
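For illustration (the PDB name and manifest path are hypothetical), the unplug and plug sequence looks like this; the DROP step is needed only when plugging the PDB back into the same CDB, and KEEP DATAFILES together with NOCOPY leaves the data files in place: 

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE; 

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml'; 

SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES; 

SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY; 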

Q6. Which two statements are true about the logical storage structure of an Oracle database? 

A. An extent contains data blocks that are always physically contiguous on disk. 

B. An extent can span multiple segments. 

C. Each data block always corresponds to one operating system block. 

D. It is possible to have tablespaces of different block sizes. 

E. A data block is the smallest unit of I/O in data files. 

Answer: D,E 

Reference: http://docs.oracle.com/cd/E11882_01/server.112/e40540/logical.htm#CNCPT250 
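As an example of option D, a tablespace with a nondefault block size can be created once a matching buffer cache has been configured (the cache size, tablespace name, and file path below are illustrative): 

SQL> ALTER SYSTEM SET db_16k_cache_size = 64M; 

SQL> CREATE TABLESPACE ts_16k DATAFILE '/u01/app/oracle/oradata/orcl/ts_16k01.dbf' SIZE 100M BLOCKSIZE 16K; 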

Q7. You connected by using SQL*Plus to the root container of a multitenant container database (CDB) with the SYSDBA privilege. 

The CDB has several pluggable databases (PDBs) open in the read/write mode. 

There are ongoing transactions in both the CDB and PDBs. 

What happens after issuing the SHUTDOWN TRANSACTIONAL statement? 

A. The shutdown proceeds immediately. 

The shutdown proceeds as soon as all transactions in the PDBs are either committed or rolled back. 

B. The shutdown proceeds as soon as all transactions in the CDB are either committed or rolled back. 

C. The shutdown proceeds as soon as all transactions in both the CDB and PDBs are either committed or rolled back. 

D. The statement results in an error because there are open PDBs. 

Answer:

Explanation: * SHUTDOWN [ABORT | IMMEDIATE | NORMAL | TRANSACTIONAL [LOCAL]] 

Shuts down a currently running Oracle Database instance, optionally closing and dismounting a database. If the current database is a pluggable database, only the pluggable database is closed. The consolidated instance continues to run. 

Shutdown commands that wait for current calls to complete or users to disconnect such as SHUTDOWN NORMAL and SHUTDOWN TRANSACTIONAL have a time limit that the SHUTDOWN command will wait. If all events blocking the shutdown have not occurred within the time limit, the shutdown command cancels with the following message: 

ORA-01013: user requested cancel of current operation 

* If logged into a CDB, shutdown closes the CDB instance. 

To shut down a CDB or non-CDB, you must be connected to the CDB or non-CDB instance that you want to close, and then enter: 

SHUTDOWN 

Database closed. 

Database dismounted. 

Oracle instance shut down. 

To shut down a PDB, you must log in to the PDB to issue the SHUTDOWN command. 

SHUTDOWN 

Pluggable Database closed. 

Note: 

* Prerequisites for PDB Shutdown 

When the current container is a pluggable database (PDB), the SHUTDOWN command can only be used if: 

The current user has SYSDBA, SYSOPER, SYSBACKUP, or SYSDG system privilege. 

The privilege is either commonly granted or locally granted in the PDB. 

The current user exercises the privilege using AS SYSDBA, AS SYSOPER, AS SYSBACKUP, or AS SYSDG at connect time. 

To close a PDB, the PDB must be open. 
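As a brief illustration of the scenario in the question, the statement is issued from the root container after connecting with the SYSDBA privilege (connection syntax only; the outcome is what the answer choices describe): 

SQL> CONNECT / AS SYSDBA 

SQL> SHUTDOWN TRANSACTIONAL 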

Q8. Your multitenant container database (CDB) contains two pluggable databases (PDBs). You want to create a new PDB by using SQL Developer. 

Which statement is true? 

A. The CDB must be open. 

B. The CDB must be in the mount stage. 

C. The CDB must be in the nomount stage. 

D. All existing PDBs must be closed. 

Answer:

Explanation: 

* Creating a PDB. Rather than constructing the data dictionary tables that define an empty PDB from scratch, and then populating its Obj$ and Dependency$ tables, the empty PDB is created when the CDB is created. (Here, we use empty to mean containing no customer-created artifacts.) It is referred to as the seed PDB and has the name PDB$Seed. Every CDB non-negotiably contains a seed PDB; it is non-negotiably always open in read-only mode. This has no conceptual significance; rather, it is just an optimization device. The create PDB operation is implemented as a special case of the clone PDB operation. The size of the seed PDB is only about 1 gigabyte and it takes only a few seconds on a typical machine to copy it. 
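For illustration, the create-PDB operation performed through SQL Developer corresponds to a CREATE PLUGGABLE DATABASE statement issued against the CDB, along these lines (the PDB name, admin user, password, and file paths are hypothetical): 

SQL> CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdbadmin IDENTIFIED BY Welcome1 FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/', '/u01/app/oracle/oradata/cdb1/pdb3/'); 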

Q9. Which three statements are true about SQL plan directives? 

A. They are tied to a specific statement or SQL ID. 

B. They instruct the maintenance job to collect missing statistics or perform dynamic sampling to generate a more optimal plan. 

C. They are used to gather only missing statistics. 

D. They are created for a query expression where statistics are missing or the cardinality estimates by the optimizer are incorrect. 

E. They instruct the optimizer to create only column group statistics. 

F. They improve plan accuracy by persisting both compilation and execution statistics in the SYSAUX tablespace. 

Answer: B,D,E 

Explanation: During SQL execution, if a cardinality misestimate occurs, then the database creates SQL plan directives. During SQL compilation, the optimizer examines the query corresponding to the directive to determine whether missing extensions or histograms exist (D). The optimizer records any missing extensions. Subsequent DBMS_STATS calls collect statistics for the extensions. 

The optimizer uses dynamic sampling whenever it does not have sufficient statistics corresponding to the directive. (B, not C) 

E: Currently, the optimizer monitors only column groups. The optimizer does not create an extension on expressions. 

Incorrect: 

Not A: SQL plan directives are not tied to a specific SQL statement or SQL ID. 

Note: 

* A SQL plan directive is additional information and instructions that the optimizer can use to generate a more optimal plan. For example, a SQL plan directive can instruct the optimizer to record a missing extension. 
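Existing directives, their state, and the reason they were created can be inspected through the data dictionary, for example: 

SQL> SELECT directive_id, type, state, reason FROM dba_sql_plan_directives; 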

Q10. Which two statements are true about extents? 

A. Blocks belonging to an extent can be spread across multiple data files. 

B. Data blocks in an extent are logically contiguous but can be non-contiguous on disk. 

C. The blocks of a newly allocated extent, although free, may have been used before. 

D. Data blocks in an extent are automatically reclaimed for use by other objects in a tablespace when all the rows in a table are deleted. 

Answer: B,C
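The extent layout of a segment can be examined through DBA_EXTENTS, for example (the owner and segment names below are illustrative): 

SQL> SELECT extent_id, file_id, block_id, blocks FROM dba_extents WHERE owner = 'SH' AND segment_name = 'CUSTOMERS'; 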