1Z0-062 Premium Bundle

Oracle Database 12c: Installation and Administration Certification Exam

Last update: May 20, 2024

Oracle 1Z0-062 Free Practice Questions

Q1. Examine the parameters for your database instance: 

NAME             TYPE     VALUE 
---------------- -------- ---------- 
undo_management  string   AUTO 
undo_retention   integer  1200 
undo_tablespace  string   UNDOTBS1 

You execute the following command: 

SQL> ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE; 

Which statement is true in this scenario? 

A. Undo data is written to flashback logs after 1200 seconds. 

B. Inactive undo data is retained for 1200 seconds even if subsequent transactions fail due to lack of space in the undo tablespace. 

C. You can perform a Flashback Database operation only within the duration of 1200 seconds. 

D. An attempt is made to keep inactive undo for 1200 seconds but transactions may overwrite the undo before that time has elapsed. 

Answer: D
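
For contrast, a minimal sketch (reusing the same UNDOTBS1 tablespace from the question) of re-enabling guaranteed retention and checking the current setting through DBA_TABLESPACES:

SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- RETENTION shows GUARANTEE, NOGUARANTEE, or NOT APPLY
SQL> SELECT tablespace_name, retention
     FROM   dba_tablespaces
     WHERE  tablespace_name = 'UNDOTBS1';

With GUARANTEE in effect, unexpired undo is never overwritten, even if that causes DML needing undo space to fail.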

Q2. You upgraded your database from pre-12c to a multitenant container database (CDB) containing pluggable databases (PDBs). 

Examine the query and its output: 

Which two tasks must you perform to add users with SYSBACKUP, SYSDG, and SYSKM privilege to the password file? 

A. Assign the appropriate operating system groups to SYSBACKUP, SYSDG, SYSKM. 

B. Grant SYSBACKUP, SYSDG, and SYSKM privileges to the intended users. 

C. Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege and the FORCE argument set to No. 

D. Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege, and FORCE arguments set to Yes. 

E. Re-create the password file in the Oracle Database 12c format. 

Answer: B,D 

Explanation: 

* orapwd 

/ You can create a database password file using the password file creation utility, ORAPWD. 

The syntax of the ORAPWD command is as follows: 

orapwd FILE=filename [ENTRIES=numusers] [FORCE={y|n}] [ASM={y|n}] 

[DBUNIQUENAME=dbname] [FORMAT={12|legacy}] [SYSBACKUP={y|n}] [SYSDG={y|n}] 

[SYSKM={y|n}] [DELETE={y|n}] [INPUT_FILE=input-fname] 

FORCE - whether to overwrite an existing file (optional). 

* v$PWFILE_users / 12c: V$PWFILE_USERS lists all users in the password file, and indicates whether the user has been granted the SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM privileges. 

/ 10g: lists users who have been granted SYSDBA and SYSOPER privileges as derived from the password file. 

Column    Datatype      Description 
--------  ------------  ----------------------------------------------------------- 
USERNAME  VARCHAR2(30)  The name of the user that is contained in the password file 
SYSDBA    VARCHAR2(5)   If TRUE, the user can connect with SYSDBA privileges 
SYSOPER   VARCHAR2(5)   If TRUE, the user can connect with SYSOPER privileges 

Incorrect: 

not E: The password file (as listed in V$PWFILE_USERS) is already in the 12c format. 
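
A hedged sketch of the two answer steps; the password file location and the grantee names are illustrative, not from the exhibit:

$ orapwd FILE=$ORACLE_HOME/dbs/orapwORCL ENTRIES=10 FORCE=y SYSBACKUP=y SYSDG=y SYSKM=y

-- then grant the administrative privileges to the intended users:
SQL> GRANT SYSBACKUP TO bkup_admin;
SQL> GRANT SYSDG TO dg_admin;
SQL> GRANT SYSKM TO key_admin;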

Q3. In your multitenant container database (CDB) containing several pluggable databases (PDBs), you execute the following commands in the root container: 

Which two statements are true? 

A. The C##ROLE1 role is created in the root database and all the PDBs. 

B. The C##ROLE1 role is created only in the root database because the CONTAINER clause is not used. 

C. Privileges are granted to the C##A_ADMIN user only in the root database. 

D. Privileges are granted to the C##A_ADMIN user in the root database and all PDBs. 

E. The statement for granting a role to a user fails because the CONTAINER clause is not used. 

Answer: A,C 

Explanation: 

* You can include the CONTAINER clause in several SQL statements, such as the CREATE USER, ALTER USER, CREATE ROLE, GRANT, REVOKE, and ALTER SYSTEM statements. 

* CREATE ROLE with the (optional) CONTAINER clause: 

/ CONTAINER = ALL creates a common role. 

/ CONTAINER = CURRENT creates a local role in the current PDB. 
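
A hedged sketch of the pattern this question tests, issued from the root by a common user; the privilege granted is illustrative, the role and user names are taken from the options:

SQL> CREATE ROLE c##role1 CONTAINER = ALL;   -- common role, exists in the root and all PDBs
SQL> GRANT CREATE TABLE TO c##role1;         -- without CONTAINER = ALL, applies only to the current container
SQL> GRANT c##role1 TO c##a_admin;           -- likewise effective only in the root unless CONTAINER = ALL is added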

Q4. You upgraded from a previous Oracle Database version to Oracle Database 12c. Your database supports a mixed workload. During the day, lots of insert, update, and delete operations are performed. At night, Extract, Transform, Load (ETL) and batch reporting jobs are run. The ETL jobs perform certain database operations using two or more concurrent sessions. 

After the upgrade, you notice that the performance of ETL jobs has degraded. To ascertain the cause of performance degradation, you want to collect basic statistics such as the level of parallelism, total database time, and the number of I/O requests for the ETL jobs. 

How do you accomplish this? 

A. Examine the Active Session History (ASH) reports for the time period of the ETL or batch reporting runs. 

B. Enable SQL tracing for the ETL and batch reporting queries and gather diagnostic data from the trace files. 

C. Enable real-time SQL monitoring for ETL jobs and gather diagnostic data from the V$SQL_MONITOR view. 

D. Enable real-time database operation monitoring using the DBMS_SQL_MONITOR.BEGIN_OPERATION function, and then use the DBMS_SQL_MONITOR.REPORT_SQL_MONITOR function to view the required information. 

Answer: D

Explanation: * Monitoring database operations Real-Time Database Operations Monitoring enables you to monitor long running database tasks such as batch jobs, scheduler jobs, and Extraction, Transformation, and Loading (ETL) jobs as a composite business operation. This feature tracks the progress of SQL and PL/SQL queries associated with the business operation being monitored. As a DBA or developer, you can define business operations for monitoring by explicitly specifying the start and end of the operation or implicitly with tags that identify the operation. 
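
A minimal sketch of option D, assuming a hypothetical operation name etl_nightly and using the parameter names documented for the 12c DBMS_SQL_MONITOR package; the ETL statements themselves are omitted:

DECLARE
  l_eid NUMBER;
BEGIN
  -- start tracking the composite database operation
  l_eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(dbop_name => 'etl_nightly',
                                             forced_tracking => 'Y');

  -- ... run the ETL statements here ...

  DBMS_SQL_MONITOR.END_OPERATION(dbop_name => 'etl_nightly', dbop_eid => l_eid);
END;
/

-- report on the monitored operation (level of parallelism, DB time, I/O requests, ...)
SELECT DBMS_SQL_MONITOR.REPORT_SQL_MONITOR(dbop_name => 'etl_nightly', type => 'TEXT')
FROM   dual;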

Q5. Examine the following query output: 

You issue the following command to import tables into the hr schema: 

$ impdp hr/hr DIRECTORY=dumpdir DUMPFILE=hr_new.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y 

Which statement is true? 

A. All database operations performed by the impdp command are logged. 

B. Only CREATE INDEX and CREATE TABLE statements generated by the import are logged. 

C. Only CREATE TABLE and ALTER TABLE statements generated by the import are logged. 

D. None of the operations against the master table used by Oracle Data Pump to coordinate its activities are logged. 

Answer:

Explanation: Oracle Data Pump disables redo logging when loading data into tables and when creating indexes. The new TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for objects during the course of the import. The master table is used to track the detailed progress information of a Data Pump job. It is created in the schema of the user running the Data Pump export or import, and it keeps track of a great deal of detailed information. 
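
Note that the transform only suppresses redo where the database allows it; a quick hedged check of whether FORCE LOGGING mode (which overrides DISABLE_ARCHIVE_LOGGING) is in effect:

SQL> SELECT force_logging FROM v$database;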

Q6. In which two scenarios do you use SQL*Loader to load data? 

A. Transform the data while it is being loaded into the database. 

B. Use transparent parallel processing without having to split the external data first. 

C. Load data into multiple tables during the same load statement. 

D. Generate unique sequential key values in specified columns. 

Answer: A,D 

Explanation: You can use SQL*Loader to do the following: 

/ (A) Manipulate the data before loading it, using SQL functions. 

/ (D) Generate unique sequential key values in specified columns. 

Other SQL*Loader capabilities: 

/ Load data into multiple tables during the same load session. 

/ Load data across a network. This means that you can run the SQL*Loader client on a different system from the one that is running the SQL*Loader server. 

/ Load data from multiple datafiles during the same load session. 

/ Specify the character set of the data. 

/ Selectively load data (you can load records based on the records' values). 

/ Use the operating system's file system to access the datafiles. 

/ Load data from disk, tape, or named pipe. 

/ Generate sophisticated error reports, which greatly aid troubleshooting. 

/ Load arbitrarily complex object-relational data. 

/ Use secondary datafiles for loading LOBs and collections. 

/ Use either conventional or direct path loading. While conventional path loading is very flexible, direct path loading provides superior loading performance. 

Note: 

* SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the datafile. 
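
A minimal control-file sketch illustrating (A) and (D); the table, data file, and column names are illustrative:

LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ','
( empno    SEQUENCE(MAX,1),          -- (D) generate unique sequential key values
  ename    CHAR "UPPER(:ename)",     -- (A) transform the data with a SQL function while loading
  hiredate DATE "YYYY-MM-DD"
)

Run with, for example, $ sqlldr userid=scott/tiger control=emp.ctl.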

Q7. You are connected using SQL*Plus to a multitenant container database (CDB) with SYSDBA privileges and execute the following sequence of statements: 

What is the result of the last SET CONTAINER statement and why is it so? 

A. It succeeds because the PDB_ADMIN user has the required privileges. 

B. It fails because common users are unable to use the SET CONTAINER statement. 

C. It fails because local users are unable to use the SET CONTAINER statement. 

D. It fails because the SET CONTAINER statement cannot be used with PDB$SEED as the target pluggable database (PDB). 

Answer:

Q8. You execute the commands: 

SQL>CREATE USER sidney 

IDENTIFIED BY out_standing1 

DEFAULT TABLESPACE users 

QUOTA 10M ON users 

TEMPORARY TABLESPACE temp 

ACCOUNT UNLOCK; 

SQL> GRANT CREATE SESSION TO Sidney; 

Which two statements are true? 

A. The create user command fails if any role with the name Sidney exists in the database. 

B. The user sidney can connect to the database instance but cannot perform sort operations because no space quota is specified for the temp tablespace. 

C. The user sidney is created but cannot connect to the database instance because no profile is assigned to the user. 

D. The user sidney can connect to the database instance but requires relevant privileges to create objects in the users tablespace. 

E. The user sidney is created and authenticated by the operating system. 

Answer: A,D 
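
Regarding D, a hedged sketch: CREATE SESSION only lets sidney log on; to create objects in the USERS tablespace the user still needs an object-creation privilege, for example:

SQL> GRANT CREATE TABLE TO sidney;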

Q9. You have installed two 64G flash devices to support the Database Smart Flash Cache feature on your database server that is running on Oracle Linux. 

You have set the DB_FLASH_CACHE_FILE parameter: 

DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2' 

How should the DB_FLASH_CACHE_SIZE be configured to use both devices? 

A. Set DB_FLASH_CACHE_SIZE = 64G. 

B. Set DB_FLASH_CACHE_SIZE = 64G, 64G. 

C. Set DB_FLASH_CACHE_SIZE = 128G. 

D. DB_FLASH_CACHE_SIZE is automatically configured by the instance at startup. 

Answer: B

Explanation: * The Smart Flash Cache concept is not new in Oracle 12c; Database Smart Flash Cache was already available in Oracle 11g. 

In this release Oracle has made changes to both initialization parameters used by Database Smart Flash Cache. You can now define multiple files or devices, and a size for each, for the Database Smart Flash Cache area. In previous releases only one file or device could be defined. 

DB_FLASH_CACHE_FILE = /dev/sda, /dev/sdb, /dev/sdc 

DB_FLASH_CACHE_SIZE = 32G, 32G, 64G 

The settings above define three devices to be used by Database Smart Flash Cache: 

/dev/sda – size 32G 
/dev/sdb – size 32G 
/dev/sdc – size 64G 

The new view V$FLASHFILESTAT is used to determine the cumulative latency and read counts of each file or device and to compute the average latency. 
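
Applied to this scenario's two 64 GB devices, a hedged sketch of the corresponding settings (mirroring the pattern above):

DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
DB_FLASH_CACHE_SIZE = 64G, 64G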

Q10. Examine the structure of the sales table, which is stored in a locally managed tablespace with Automatic Segment Space Management (ASSM) enabled. 

Name           Null?     Type 
-------------- --------- ------------- 
PROD_ID        NOT NULL  NUMBER 
CUST_ID        NOT NULL  NUMBER 
TIME_ID        NOT NULL  DATE 
CHANNEL_ID     NOT NULL  NUMBER 
PROMO_ID       NOT NULL  NUMBER 
QUANTITY_SOLD  NOT NULL  NUMBER(10,2) 
AMOUNT_SOLD    NOT NULL  NUMBER(10,2) 

You want to perform online segment shrink to reclaim fragmented free space below the high water mark. 

What should you ensure before the start of the operation? 

A. Row movement is enabled. 

B. Referential integrity constraints for the table are disabled. 

C. No queries are running on this table. 

D. Extra disk space equivalent to the size of the segment is available in the tablespace. 

E. No pending transaction exists on the table. 

Answer: A
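
A minimal sketch of the operation on the SALES table; row movement must be enabled first, and the optional COMPACT and CASCADE clauses are not shown:

SQL> ALTER TABLE sales ENABLE ROW MOVEMENT;
SQL> ALTER TABLE sales SHRINK SPACE;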

Q11. Identify three benefits of Unified Auditing. 

A. Decreased use of storage to store audit trail rows in the database. 

B. It improves overall auditing performance. 

C. It guarantees zero-loss auditing. 

D. The audit trail cannot be easily modified because it is read-only. 

E. It automatically audits Recovery Manager (RMAN) events. 

Answer: A,B,E 

Explanation: A: Starting with 12c, Oracle has unified all of the auditing types into one single unit called unified auditing. You no longer have to turn the different auditing types on or off individually; in fact, auditing is enabled by default right out of the box. The AUD$ and FGA_LOG$ tables have been replaced with a single audit trail table. All of the audit data is now stored in a SecureFiles table, thus improving the overall management of the audit data itself. 

B: Further the audit data can also be buffered solving most of the common performance related problems seen on busy environments. 

E: Unified Auditing is able to collect audit data for Fine Grained Audit, RMAN, Data Pump, Label Security, Database Vault and Real Application Security operations. 

Note: 

* Benefits of the Unified Audit Trail 

The benefits of a unified audit trail are many: / (B) Overall auditing performance is greatly improved. The default mode in which unified auditing works is Queued Write mode. In this mode, the audit records are batched in an SGA queue and persisted periodically. Because the audit records are written to an SGA queue, there is a significant performance improvement. 

/ The unified auditing functionality is always enabled and does not depend on the initialization parameters that were used in previous releases 

/ (A) The audit records, including records from the SYS audit trail, for all the audited components of your Oracle Database installation are placed in one location and in one format, rather than your having to look in different places to find audit trails in varying formats. This consolidated view enables auditors to correlate audit information from different components. For example, if an error occurred during an INSERT statement, standard auditing can indicate the error number and the SQL that was executed. Oracle Database Vault-specific information can indicate whether this error happened because of a command rule violation or realm violation. Note that there will be two audit records with a distinct AUDIT_TYPE. With this unification in place, SYS audit records appear with AUDIT_TYPE set to Standard Audit. 

/ The management and security of the audit trail is also improved by having it in single audit trail. 

/ You can create named audit policies that enable you to audit the supported components listed at the beginning of this section, as well as SYS administrative users. Furthermore, you can build conditions and exclusions into your policies. 

* Oracle Database 12c Unified Auditing enables selective and effective auditing inside the Oracle database using policies and conditions. The new policy based syntax simplifies management of auditing within the database and provides the ability to accelerate auditing based on conditions. 

* The new architecture unifies the existing audit trails into a single audit trail, enabling simplified management and increasing the security of audit data generated by the database. 
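
A hedged sketch of such a named, condition-based unified audit policy; the policy name, schema, and condition are illustrative:

CREATE AUDIT POLICY emp_dml_pol
  ACTIONS SELECT ON hr.employees, UPDATE ON hr.employees
  WHEN 'SYS_CONTEXT(''USERENV'', ''SESSION_USER'') <> ''HR'''
  EVALUATE PER SESSION;

AUDIT POLICY emp_dml_pol;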

Q12. You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema. 

Examine the following steps: 

1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

2. Execute the DBMS_STATS.SEED_COL_USAGE (null, ‘SH’, 500) procedure. 

3. Execute the required queries on the CUSTOMERS table. 

4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE (‘SH’, ‘CUSTOMERS’) FROM dual statement. 

Identify the correct sequence of steps. 

A. 3, 2, 1, 4 

B. 2, 3, 4, 1 

C. 4, 1, 3, 2 

D. 3, 2, 4, 1 

Answer: B

Explanation: 

Step 1 (2). Seed column usage. Oracle must observe a representative workload in order to determine the appropriate column groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload. 

Step 2 (3). You don't need to execute all of the queries in your workload during this window. You can simply run EXPLAIN PLAN for some of your longer-running queries to ensure column group information is recorded for them. 

Step 3 (1). Create the column groups. At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window. You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table. This function requires just two arguments, the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered on the table. 

Note: 

* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object. 

* The Oracle SQL optimizer has always been ignorant of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns. 

* Creating extended statistics. Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats: 

1 - The first step is to create column histograms for the related columns. 
2 - Next, we run dbms_stats.create_extended_stats to relate the columns together. 

Unlike a traditional procedure that is invoked via an execute ("exec") statement, Oracle extended statistics are created via a SELECT statement. 
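
The whole flow on SH.CUSTOMERS, as a hedged sketch (the workload query and its literal values are illustrative; the 500-second window comes from the question):

-- (2) observe the workload for 500 seconds
EXEC DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500);

-- (3) run or explain the representative queries during that window
EXPLAIN PLAN FOR
  SELECT * FROM sh.customers
  WHERE  cust_state_province = 'CA' AND country_id = 52790;

-- (4) report what was captured
SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;

-- (1) create the column groups from the captured usage
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;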

Q13. Which three statements are true concerning the multitenant architecture? 

A. Each pluggable database (PDB) has its own set of background processes. 

B. A PDB can have a private temp tablespace. 

C. PDBs can share the sysaux tablespace. 

D. Log switches occur only at the multitenant container database (CDB) level. 

E. Different PDBs can have different default block sizes. 

F. PDBs share a common system tablespace. 

G. Instance recovery is always performed at the CDB level. 

Answer: B,D,G 

Explanation: B: 

* A PDB would have its own SYSTEM, SYSAUX, and TEMP tablespaces. It can also contain other user-created tablespaces. 

* There is one default temporary tablespace for the entire CDB. However, you can create additional temporary tablespaces in individual PDBs. 

D: 

* There is a single redo log and a single control file for an entire CDB 

* A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. 

G: Instance recovery: the automatic application of redo log records to uncommitted data blocks when a database instance is restarted after a failure. 

Incorrect: Not A: 

* There is one set of background processes shared by the root and all PDBs. 

* High consolidation density. The many pluggable databases in a single container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can single databases that use the old architecture. 

Not C: There is a separate SYSAUX tablespace for the root and for each PDB. 

Not F: There is a separate SYSTEM tablespace for the root and for each PDB. 
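
Illustrating B, a hedged sketch (the PDB name and file path are assumptions) of adding a local temporary tablespace inside one PDB:

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> CREATE TEMPORARY TABLESPACE pdb1_temp
       TEMPFILE '/u01/app/oracle/oradata/cdb1/pdb1/pdb1_temp01.dbf' SIZE 100M;
SQL> ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE pdb1_temp;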

Q14. Which two statements are true when row archival management is enabled? 

A. The ORA_ARCHIVE_STATE column visibility is controlled by the ROW ARCHIVAL VISIBILITY session parameter. 

B. The ORA_ARCHIVE_STATE column is updated manually or by a program that could reference activity tracking columns, to indicate that a row is no longer considered active. 

C. The ROW ARCHIVAL VISIBILITY session parameter defaults to active rows only. 

D. The ORA_ARCHIVE_STATE column is visible if referenced in the select list of a query. 

E. The ORA_ARCHIVE_STATE column is updated automatically by the Oracle Server based on activity tracking columns, to Indicate that a row is no longer considered active. 

Answer: A,B 

Explanation: A: Below we see a case where we set the ROW ARCHIVAL VISIBILITY session parameter to ALL, thereby allowing us to see all of the rows that have been logically deleted: 

alter session set row archival visibility = all; 

We can then turn row invisibility back on by setting the parameter back to ACTIVE: 

alter session set row archival visibility = active; 

B: To use ora_archive_state as an alternative to deleting rows, you need the following settings and parameters: 

1. Create the table with the row archival clause 

create table mytab (col1 number, col2 char(200)) row archival; 

2. Now that the table is marked as row archival, you have two methods for removing rows, a permanent solution with the standard delete DML, plus the new syntax where you set ora_archive_state to a non-zero value: 

update mytab set ora_archive_state=2 where col2='FRED' 

3. To make "invisible rows" visible again, you simply set the rows ora_archive_state to zero: 

update mytab set ora_archive_state=0 where col2='FRED' 

Note: 

* Starting in Oracle 12c, Oracle provides a new feature that allow you to "logically delete" a row in a table without physically removing the row. This effectively makes deleted rows "invisible" to all SQL and DML, but they can be revealed at any time, providing a sort of "instant" rollback method. 


Q15. A redaction policy was added to the SAL column of the SCOTT.EMP table:

 

All users have their default set of system privileges. 

For which three situations will data not be redacted? 

A. SYS sessions, regardless of the roles that are set in the session 

B. SYSTEM sessions, regardless of the roles that are set in the session 

C. SCOTT sessions, only if the MGR role is set in the session 

D. SCOTT sessions, only if the MGR role is granted to SCOTT 

E. SCOTT sessions, because he is the owner of the table 

F. SYSTEM session, only if the MGR role is set in the session 

Answer: A,D,F 

Explanation: 

* SYS_CONTEXT: this is a twist on the SYS_CONTEXT function, as it does not use the USERENV namespace. With this usage, SYS_CONTEXT queries the list of roles currently enabled in the session and returns TRUE if the named role is enabled. 

Example: 

SYS_CONTEXT('SYS_SESSION_ROLES', 'SUPERVISOR') 

conn scott/tiger@pdborcl 

SELECT sys_context('SYS_SESSION_ROLES', 'RESOURCE') FROM dual; 

SYS_CONTEXT('SYS_SESSION_ROLES','RESOURCE') 
------------------------------------------- 
FALSE 

conn sys@pdborcl as sysdba 

GRANT resource TO scott; 

conn scott/tiger@pdborcl 

SELECT sys_context('SYS_SESSION_ROLES', 'RESOURCE') FROM dual; 

SYS_CONTEXT('SYS_SESSION_ROLES','RESOURCE') 
------------------------------------------- 
TRUE 
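
The exhibit's policy is not reproduced above; purely as a hedged illustration of the kind of policy the options assume (all names and the expression are assumptions), redaction can be made to depend on whether the MGR role is enabled in the querying session:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    column_name   => 'SAL',
    policy_name   => 'redact_sal',
    function_type => DBMS_REDACT.FULL,
    expression    => 'SYS_CONTEXT(''SYS_SESSION_ROLES'', ''MGR'') = ''FALSE''');
END;
/

With such an expression, data is redacted only when the expression evaluates to TRUE, i.e. it is shown unredacted whenever the MGR role is enabled in the session.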

Q16. Which statement is true concerning dropping a pluggable database (PDB)? 

A. The PDB must be open in read-only mode. 

B. The PDB must be in mount state. 

C. The PDB must be unplugged. 

D. The PDB data files are always removed from disk. 

E. A dropped PDB can never be plugged back into a multitenant container database (CDB). 

Answer: B
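
A minimal sketch of the drop (the PDB name is illustrative); the PDB has to be closed, i.e. in MOUNTED state, first:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;

The default, KEEP DATAFILES, leaves the data files on disk, which is why option D does not always hold.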
