Disclaimer

These scripts come without warranty of any kind. Use them at your own risk. I assume no liability for the accuracy, correctness, completeness, or usefulness of any information provided by this site, nor for any damages that using these scripts may cause.

Thursday, December 22, 2011

Adding Standby Redo Logs and Resolving a Data Guard Error

ORA-16801: redo transport-related property is inconsistent with database setting

Data Guard Setup: Primary with Two Standbys
DB Version: 10.2.0.4.0 64Bit
OS: Enterprise Linux 5.3 64Bit
Data Guard Broker: Yes
Redo Transport Mode: ASYNC

The second standby database was created using the RMAN duplicate command. I started the managed recovery process and the logs started applying. Data Guard Broker was then enabled for this database. While enabling and verifying the configuration for this database, I got the following error.

ORA-16801: redo transport-related property is inconsistent with database setting
ORA-16715: redo transport-related property LogXptMode of standby database "stby2" is inconsistent

Note: I tried to add Standby Redo Logs (SRLs) immediately after creating the standby but before starting managed recovery, but the command failed with "ORA-301 signalled during: ALTER DATABASE ADD STANDBY LOGFILE GROUP" and I was not able to add the standby redo logs.

I performed the following steps to resolve this issue.

1.       Stop the redo apply on the standby database
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
2.       Create the Standby Redo Log on standby

a) The size of the standby redo log files must match the size of the primary database online redo log files, so first find the size of the primary database online redo log files. The number of standby redo log groups should be at least one greater than the number of online redo log groups.
SQL> select bytes from v$log;
   BYTES
   ----------
  52428800
  52428800
  52428800
Here the size of my online redo log files is 50M.
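If you want the primary to tell you how many standby redo log groups to create (one more than the number of online redo log groups, per thread), a convenience query like the following can be run on the primary; it uses only V$LOG:

SQL> select thread#, count(group#) + 1 as recommended_srl_groups
     from v$log
     group by thread#;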

b) Use the following query to find the details of the log files in the primary database.
SQL> select group#,type,member from v$logfile;
    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         1 ONLINE  /u03/oradata/prod_redo01a.log
         1 ONLINE  /u03/oradata/prod_redo01b.log
         2 ONLINE  /u03/oradata/prod_redo02a.log
         2 ONLINE  /u03/oradata/prod_redo02b.log
         3 ONLINE  /u03/oradata/prod_redo03a.log
         3 ONLINE  /u03/oradata/prod_redo03b.log
         4 STANDBY /u03/oradata/prod_redo04a.log
         4 STANDBY /u03/oradata/prod_redo04b.log
         5 STANDBY /u03/oradata/prod_redo05a.log
         5 STANDBY /u03/oradata/prod_redo05b.log
         6 STANDBY /u03/oradata/prod_redo06a.log

    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         6 STANDBY /u03/oradata/prod_redo06b.log
         7 STANDBY /u03/oradata/prod_redo07a.log
         7 STANDBY /u03/oradata/prod_redo07b.log

From the above output you can see that I already have standby redo logs on the primary.

c) Now I created the standby redo log files in the standby database.
alter database add standby logfile group 4 ('/u03/oradata/prod_redo04a.log', '/u03/oradata/prod_redo04b.log') size 50M;
alter database add standby logfile group 5 ('/u03/oradata/prod_redo05a.log', '/u03/oradata/oas2/prod_redo05b.log') size 50M;
alter database add standby logfile group 6 ('/u03/oradata/oas2/prod_redo06a.log', '/u03/oradata/oas2/prod_redo06b.log') size 50M;
alter database add standby logfile group 7 ('/u03/oradata/oas2/prod_redo07a.log', '/u03/oradata/oas2/prod_redo07b.log') size 50M;
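You can confirm the standby redo log groups were created on the standby by querying V$STANDBY_LOG, for example:

SQL> select group#, thread#, bytes/1024/1024 as size_mb, status
     from v$standby_log;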

3.       Start the log apply

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

4.       Enable the second standby database
DGMGRL> enable database stdby2;
5.       Verify the configuration
DGMGRL> show configuration;
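If the ORA-16715 warning about LogXptMode comes back, it can also be worth checking the broker property itself. The commands below are a sketch; use the standby database name exactly as it appears in your broker configuration, and note that ASYNC matches the redo transport mode of this setup.

DGMGRL> show database verbose stby2;
DGMGRL> edit database stby2 set property LogXptMode='ASYNC';
DGMGRL> show configuration;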

Friday, December 2, 2011

Steps to add/increase the swap partition on Linux (Using Logical Volume).

In certain situations you may need to increase the swap space on your Linux server, for example when installing a new version of Oracle on an existing Linux server.

Linux has two types of swap space:
1) The swap partition. The swap partition is an independent section of the hard disk used solely for swapping; no other files can exist there.

2) The swap file.  The swap file is a special file in the filesystem that exists with your other files.
The steps below show how to increase swap space by adding another swap partition using a logical volume.

All these steps have to be done as the root user.

1. Create a logical volume
   
    Pre-req: Enough free space must be available on volume group.


 a) Open the Logical Volume Manager GUI using system-config-lvm
 b) Click on the logical view
 c) Click on the "Create New Logical Volume" button
 d) Give the LV a name, for example "swap2"
 e) Specify the size required in GB
 f) Keep all other values unchanged and click OK to create the volume
 g) Note down the new logical volume's full path (e.g. /dev/vgdata/swap2) from the volume properties and close the GUI. (A command-line alternative is sketched after this list.)
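If you prefer the command line to the system-config-lvm GUI, the same logical volume can be created with lvcreate. This is a minimal sketch; the volume group name vgdata and the 4G size are only examples:

   Check the free space in the volume group:
   # vgdisplay vgdata | grep -i free

   Create a logical volume named swap2:
   # lvcreate -L 4G -n swap2 vgdata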


2. Format the newly created volume with swap file system

   a) # mkswap </full/path to/new volume>
   ex: mkswap /dev/vgdata/swap2


3. Activate the new swap
     a) # swapon </full/path to/new volume>
  ex: swapon /dev/vgdata/swap2


4. Add the following line to /etc/fstab so the swap configuration persists after a reboot.
   </full/path to/new volume> swap swap defaults 0 0
 ex: /dev/vgdata/swap2 swap swap defaults 0 0

5. Verify the swap

 a) free -m
 b) cat /proc/swaps



Saturday, August 13, 2011

Quick Reference on Database Characterset.

Recently I was upgrading Oracle Apps 11i to R12 and the customer wanted Arabic NLS support, so I needed to convert my database character set from US7ASCII to a character set that supports Arabic. I was confused about which one to use, UTF8 or AL32UTF8. I did some research on Oracle Support and decided to use AL32UTF8. The points below helped me make that decision.
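Before converting, it is worth confirming what the database currently uses. A quick check against the standard data dictionary view:

SQL> select parameter, value
     from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');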


The default UTF-8 (Unicode) character set for 9i/10g is AL32UTF8; however, this character set is NOT recognized by any pre-9i client/server systems.

It is recommended that you use UTF8 instead of AL32UTF8 as the database character set if you have 8i (or older) servers and clients connecting to the 9i/10g system, until you can upgrade the older versions to 9i or higher.
UTF8 is Unicode revision 3.0 in 8.1.7 and up; AL32UTF8 is updated with newer Unicode versions in each major release.

Besides the difference in Unicode version, the big difference is that AL32UTF8 has built-in support for "surrogate pairs", also known as "surrogate characters" or "supplementary characters". Practically this means that in 99% of cases you can use UTF8 instead of AL32UTF8 without any problem.
There is no performance difference between UTF8 and AL32UTF8, but since UTF8 is NOT updated with newer Unicode versions, the suggestion is to use AL32UTF8 when possible.

Summary:
If you use 8.1 or 8.0 clients or servers, or connect to 8.1 or 8.0 databases, then use UTF8 as the NLS_CHARACTERSET for the 9i (and up) databases; otherwise (unless the application vendor explicitly mentions UTF8, of course) use AL32UTF8.

For a Unicode database, Oracle does not need "Unicode support" from the OS the database is running on, because the Oracle AL32UTF8 implementation does not depend on OS features.
There is also no need to "install Unicode" for the Oracle database/client software; all character sets known in a database version, including the Unicode character sets, are always installed. You simply cannot choose not to install them.
If your current Oracle version is 8.1.7 or lower, then it is best to upgrade first to a higher release, mainly because
a) you can then use AL32UTF8 (not possible in 8i), and
b) Csscan has a few issues in 8.1.7 that might cause confusion.
If your current Oracle version is 9i or up, then both options (converting before or after the upgrade) are a good choice; it simply depends on your preference and the required application changes.

Storage:
AL32UTF8 is a variable-width character set, which means that the encoding of one character can be 1, 2, 3, or 4 bytes long. This is a big difference from character sets like WE8ISO8859P1 or WE8MSWIN1252, where one character is always one byte.
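A simple way to see the variable-width behaviour is to compare character length with byte length. LENGTH and LENGTHB are standard SQL functions; the Arabic sample string below is only an illustration, and on an AL32UTF8 database (with a client passing the string correctly) it would typically return 4 characters and 8 bytes:

SQL> select length('سلام') as char_len, lengthb('سلام') as byte_len from dual;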

Tuesday, August 2, 2011

Salvage Data from corrupted Oracle Table

Recently, a query against a table using a full table scan failed with an error.
The error was ORA-8103: object no longer exists.
Then I tried to analyze the table using the VALIDATE STRUCTURE option, and that also gave the same error.
So it looked like there was some kind of corruption. I restored an old backup and checked the table again, and it reported the same error.
Restoring the table from a backup was therefore not an option, as the error had been present for a long time.
There are two indexes on this table, so I tried analyzing the indexes, and they also reported the same error.
Then I tried to repair the table using the DBMS_REPAIR package, and it crashed with an ORA-600 error.
That left only one option: somehow extract whatever data could still be retrieved from the table.
Oracle provides a method to retrieve the recoverable rows by obtaining each row's rowid through an index; the recovered rows are then inserted into a new table.

The table data can be saved by skipping the blocks that are causing the ORA-8103 error.
The following are the steps I performed to recover the data.
Disclaimer: This is for information and academic purposes only.

Pre-requisite: The table must have an indexed column.
Step 1: Create a new table with the same structure as the original table
Create table <new table name> as select * from <original table name>
where 1<>1
Step 2: Create a table to store the bad rows' rowids and error numbers.
create table bad_rows (row_id rowid
                      ,oracle_error_code number);
Step 3: Recover the data by executing the PL/SQL block below (replace the table names and columns as per your need)
set serveroutput on
DECLARE
  TYPE RowIDTab IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
  CURSOR c1 IS select /*+ index(tab1) */ rowid
  from <original table name> tab1
  where <indexed column> is NOT NULL;
  r RowIDTab;
  rows NATURAL := 20000;
  bad_rows number := 0 ;
  errors number;
  error_code number;
  myrowid rowid;
BEGIN
  OPEN c1;
  LOOP
   FETCH c1 BULK COLLECT INTO r LIMIT rows;
   EXIT WHEN r.count=0;
   BEGIN
    FORALL i IN r.FIRST..r.LAST SAVE EXCEPTIONS
     insert into <new table name>
     select /*+ ROWID(A) */ <list of columns from table (ie col1, col2,..)>
     from <original table name> A where rowid = r(i);
   EXCEPTION
   when OTHERS then
    BEGIN
     errors := SQL%BULK_EXCEPTIONS.COUNT;
     FOR err1 IN 1..errors LOOP
      error_code := SQL%BULK_EXCEPTIONS(err1).ERROR_CODE;
      if error_code in (1410, 8103) then
       myrowid := r(SQL%BULK_EXCEPTIONS(err1).ERROR_INDEX);
       bad_rows := bad_rows + 1;
       insert into bad_rows values(myrowid, error_code);
      else
       raise;
      end if;
     END LOOP;
    END;
   END;
   commit;
  END LOOP;
  commit;
  CLOSE c1;
  dbms_output.put_line('Total Bad Rows: '||bad_rows);
END;
/

Step 4: Note down the indexes on the original table so they can be recreated on the new table
Step 5: Rename the original table
Step 6: Drop the indexes
Step 7: Rename the new table to the original table name
Step 8: Recreate the indexes
Step 9: Recreate the synonyms & grants
Step 10: Compile any invalid objects that depend on this table.
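For illustration, steps 5 through 8 might look like the following. The object names (emp, emp_salvage, emp_idx1, empno) are hypothetical and only show the pattern:

alter table emp rename to emp_corrupt;          -- step 5: move the corrupted table aside
drop index emp_idx1;                            -- step 6: drop the old index
alter table emp_salvage rename to emp;          -- step 7: give the salvaged table the original name
create index emp_idx1 on emp (empno);           -- step 8: recreate the index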
========================

Monday, July 25, 2011

Restore RMAN Incremental backup to new server

In this post I will show how to restore a 10gR2 database incremental backup to a new server using RMAN. The method used here does not use an RMAN catalog database, assuming that you have only the database backup.

RMAN Backup Configuration
Daily :  Incremental Level 1 cumulative
Weekly:  Incremental Level 0
Retention Policy:   RECOVERY WINDOW OF 7 DAYS
Device: Disk
Control File Autobackup:  on
In this example, assume that the following locations are used for the restored database:

DB Name             : ORCL1
Flash Recovery Area : /u04/ORCL1/flash_recovery_area
ORACLE_BASE         : /u01/ORCL1/oracle/admin
Datafile location   : /u04/ORCL1/oradata
Pre-requisite for restore
1.       You must know the DBID of the database to be restored (a note on finding it follows this list).
2.       A valid backup is required.
3.       Sufficient Disk Space on the server to restore the backup.
4.       If restoring from tape, Tape Library must be configured to the server for restoring.
5.       The server OS/architecture must be the same as that of the backed-up database server.
6.       All recommended patches and tuning must be done on the server as per Oracle documentation.
7.       Oracle software of the same version as the backed-up database must be installed.
8.       The Oracle software must be patched to the same level as the backed-up database.
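Regarding prerequisite 1: if the source database is still accessible, the easiest way to record the DBID ahead of time is a simple query (the DBID is also printed when RMAN connects to the target and in the backup log):

SQL> select dbid, name from v$database;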


Performing Complete Recovery
1.       Restore the backup: Once the server is ready with all pre-requisites, restore the backup pieces to the specific directory/directories. Here they will be restored to the flash recovery area.
2.       Verify the directory and file permissions: The oracle user and dba group must have full permission on the restored directories and files. The owner of the files must be the oracle user.
3.       Parameter file: Identify the backup piece from the latest CONTROLFILE and SPFILE autobackups located in the restored $FLASH_RECOVERY_AREA/autobackup/<latest date> directory. The filename usually starts with o1_mf_s_ if you are backing up the spfile with the controlfile autobackup; otherwise the filename looks like o1_mf_n_.
4.       If the parameter file is not backed up as part of your backup strategy, you must create it manually.
5.       Add an entry for LISTENER_ORCL1 in $ORACLE_HOME/network/admin/tnsnames.ora (a sketch is shown below).
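A minimal tnsnames.ora entry could look like the sketch below; the host name and port are placeholders and must be replaced with your new server's values:

LISTENER_ORCL1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = <new_server_hostname>)(PORT = 1521))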
6.       Restore the SPFILE as follows. The SPFILE will be restored into $ORACLE_HOME/dbs. (If you don't have the spfile in the autobackup you will have to create it manually.)

$ export ORACLE_SID=ORCL1
$ rman target /
RMAN> startup nomount
RMAN> set DBID=<DBID of your database>
RMAN> restore spfile from "<$FLASH_RECOVERY_AREA>/autobackup/<latest directory>/o1_mf_s_751194987_6wyrfc7d_.bkp";
7.       Create a PFILE (initORCL1.ora) from the SPFILE:
RMAN> sql 'create pfile from spfile';
8.       Modify the PFILE (initORCL1.ora) and make the required changes.
9.       Correct the following parameters as per the new environment:
control_files = '/new/path/controln1.dbf','/new/path/controln2.dbf'
user_dump_dest = <ORACLE_BASE>/admin/<DBNAME>/udump
background_dump_dest = <ORACLE_BASE>/admin/<DBNAME>/bdump
core_dump_dest = <ORACLE_BASE>/admin/<DBNAME>/cdump
log_archive_dest_1 = 'location=/new/path'

10. Restore control file as follows
restore controlfile to '<path>' from "<restored/controlfile/autobackup/path>";
11.   Force Mount the Instance as follows:
RMAN> startup force mount pfile=$ORACLE_HOME/dbs/initORCL1.ora
12.   Create SPFILE from PFILE.
RMAN> sql 'create spfile from pfile'
13.   Shutdown the Database and Mount the database:
RMAN> shutdown
RMAN> startup mount;
14.    Disable Flashback Database as follows:
SQL> alter database flashback off;
15.   Disable Block Change Tracking as follows:
SQL> alter database disable block change tracking;
16.   Restore the database as follows. Tail the alert log file and make sure the database files are getting restored. Since we are restoring datafiles to a new location, the files have to be renamed (you can generate a script to do this; see the sketch after the RUN block). The following needs to be executed as one command in a RUN block.
$ export ORACLE_SID=ORCL1
$ rman target /
RMAN>
run{
set newname for datafile '<old/path/>orcl1_system01.dbf' to '/u04/ORCL1/oradata/orcl1_system01.dbf';
.
.
.
<Rename all your DBFs>
.
.
restore database;
switch datafile all;
recover database;
}
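As mentioned in step 16, the SET NEWNAME commands can be generated instead of typed by hand. The SQL below is a sketch run in SQL*Plus against the mounted instance; it assumes every datafile goes to the single directory /u04/ORCL1/oradata and keeps each file's base name:

SQL> select 'set newname for datafile ''' || name || ''' to ''/u04/ORCL1/oradata/'
            || substr(name, instr(name, '/', -1) + 1) || ''';'
     from v$datafile;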


17.   Rename Log Files:
If the restored log file path is different, rename the restored log files in the control file. Use the SQL below to generate a script to rename all the log files, and execute the generated script.
select 'alter database rename file '''|| member ||''' to '''|| replace(member,'<old/path/>','</new/path/>')||''';' from v$logfile;
18.   Open the database by executing the following command:
SQL> Alter database open resetlogs;
19.   Create Temp Tablespace
SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/new/path/<db_name>_temp02.dbf' size 500m autoextend off ;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
SQL> DROP TABLESPACE temp;

20.    Configure & change Flash Recovery Area:

SQL> alter system set db_recovery_file_dest='</new/path>/flash_recovery_area' scope=both;

Tuesday, July 12, 2011

Processing XML files and converting to Oracle RDBMS tables.

Oracle Database has the capability to store XML files. From Oracle 9i onwards there is a datatype called XMLTYPE; internally, Oracle can use a CLOB to store the XML data. In this post I will demonstrate how to load XML files into an XMLTYPE table in an Oracle Database and how to query it to extract the data in tabular form.

1. Create a table with XMLTYPE column

create table XML_FILES
( XMLDATA    sys.XMLTYPE     NOT NULL,
  FILE_ID    NUMBER (10)     NOT NULL,
  FILE_NAME  VARCHAR2 (255)  NOT NULL
);


2. Load the XML files using SQL Loader.
 a) Create a SQL Loader control file
  Open a text editor, add the entries below, and save the file as load_xml.ctl

  load data
  infile 'xml_files.dat'
  into table XML_FILES append
  fields terminated by ','
  (
  xml_filler filler char
  ,xmldata lobfile(xml_filler) terminated by EOF
  ,file_id
  ,file_name
  )


 b) Create  data file xml_files.dat.
  This file will be used by the SQL*Loader control file to read the XML file name (used for the LOBFILE), the file id, and the file name. Optionally, you can use a shell script or batch program to generate the .dat file by reading the directory. The structure of the file should look similar to what is shown below.

     XMLFILE_AA_11.xml,1,'XMLFILE_AA_11.xml'
  XMLFILE_AA_12.xml,2,'XMLFILE_AA_12.xml'
  XMLFILE_AA_13.xml,3,'XMLFILE_AA_13.xml'
  XMLFILE_AA_14.xml,4,'XMLFILE_AA_14.xml'
  XMLFILE_AA_15.xml,5,'XMLFILE_AA_15.xml'


  SAMPLE XML File content is shown below.

   <?xml version="1.0" encoding="UTF-8"?>
   <EMPDATA TIME_STAMP="2006-08-07T15:00:42"
    TRANSACTION_ID="2006-08-07T15:00:42" VERSION="1.0">
    <HEADER>
     <SENDER>
      <SENDER_NAME>XYZ Corp</SENDER_NAME>
     </SENDER>
     <RECEIVER>
      <RECEIVER_NAME>ABC Bank</RECEIVER_NAME>
     </RECEIVER>
    </HEADER>
    <BODY>
   <EMP>
         <EMPNUM>12345678</EMPNUM>
         <DEPT>07I7</DEPT>
         <LOCATION>22</LOCATION>
         <HIRE_DATE>2006-08-07 00:12:00.0</HIRE_DATE>
         <FIRST_NAME>JOHN</FIRST_NAME>
         <LAST_NAME>SMITH</LAST_NAME>
    <PHONES>
    <PHONE type="Office">5676739</PHONE>
    <PHONE type="Office">9123412432</PHONE>
    </PHONES>
     <AC_NO>12313-123123-1233</AC_NO>
     <AMOUNT>2500.00</AMOUNT>
   </EMP>
   <EMP>
         <EMPNUM>12345679</EMPNUM>
        <DEPT>07I2</DEPT>
        <LOCATION>22</LOCATION>
        <HIRE_DATE>2006-08-17 00:12:00.0</HIRE_DATE>
        <FIRST_NAME>GREG</FIRST_NAME>
        <LAST_NAME>WINTER</LAST_NAME>
     <PHONES>
    <PHONE type="Office">5676733</PHONE>
    <PHONE type="Office">9123434231</PHONE>
     </PHONES>
   <AC_NO>12313-123144_4354</AC_NO>
   <AMOUNT>2800.00</AMOUNT>
       </EMP>
   </BODY>
  </EMPDATA>


 c) Load the XML files to table using SQL Loader

  Execute the below command in the command prompt.

  sqlldr xmlstage@devl control=load_xml.ctl
  Once the command has executed successfully, the XML files will have been loaded into the table. You can query the table to see the loaded data:
  SQL>  select xmldata,file_id,file_name from xml_files;

  But the output will not show the XML file content entirely.
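If you need to see the complete XML content of a row, one option is the XMLType getClobVal() method, for example:

  SQL> select t.file_name, t.xmldata.getClobVal() as xml_doc
       from xml_files t;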

3. Querying the data in the XMLTYPE column

 a) The query below will extract emp_num, acct_num and amount from the XML:
  SELECT EXTRACTVALUE (VALUE (ctba), '/EMP/EMPNUM') emp_num,
          EXTRACTVALUE (VALUE (ctba), '/EMP/AC_NO') acct_num,
          EXTRACTVALUE (VALUE (ctba), '/EMP/AMOUNT') amount
     FROM xmlstage.xml_files,
          TABLE (XMLSEQUENCE (EXTRACT (xmldata, '/EMPDATA/BODY/EMP'))) ctba


 b) The query below will extract the phone number and phone type. It is a little different because we have to extract an attribute of a node; attributes are specified with "@".

  SELECT EXTRACTVALUE (VALUE (ctba), '/EMP/EMPNUM') emp_num,
          TO_TIMESTAMP (EXTRACTVALUE (VALUE (ctba), '/EMP/HIRE_DATE'),
                        'YYYY-MM-DD HH24:MI:SS.FF'
                       ) hire_date,
          EXTRACTVALUE (VALUE (ctbb),
'/PHONE/@type') phone_type,
         EXTRACTVALUE (VALUE (ctbb), '/PHONE') phone
    FROM xmlstage.xml_files,
         TABLE (XMLSEQUENCE (EXTRACT (xmldata, '/EMPDATA/BODY/EMP'))) ctba,
   TABLE (XMLSEQUENCE (EXTRACT (VALUE (ctba), '/EMP/PHONES/PHONE'))) ctbb

 c) If your XML contains an XML namespace attribute, you should pass the xmlns declaration as an additional (third) parameter to the EXTRACT and EXTRACTVALUE functions.
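For example, if the EMPDATA document declared a default namespace, the calls might look like the sketch below; the namespace URI is purely hypothetical:

  SELECT EXTRACTVALUE (VALUE (ctba), '/EMP/EMPNUM', 'xmlns="http://www.example.com/empdata"') emp_num
    FROM xmlstage.xml_files,
         TABLE (XMLSEQUENCE (EXTRACT (xmldata, '/EMPDATA/BODY/EMP', 'xmlns="http://www.example.com/empdata"'))) ctba;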


You can use queries like the ones above to extract data from the XMLTYPE column and insert it into relational tables for further processing.

Thursday, July 7, 2011

Create Database Documentation using Oracle Grid Control

Oracle Grid Control is one of the most useful tools for DBAs and system administrators.
You can generate documentation of your database using Oracle Grid Control with a couple of mouse clicks.
You can achieve this using the report functionality in Grid Control.
The Grid Control management repository stores configuration and performance information about your databases and provides many useful views.
The SYSMAN schema holds information about objects such as tables, tablespaces, views, triggers, packages, procedures, synonyms, etc. of the target databases.
It also holds information about the host server, including hardware and OS.

Here I will share some of the SQLs used for generating the report.

Grid Control provides dynamic parameters for the target and time period. You can use these by including bind variable placeholders in your SQL statement.
For example, ??EMIP_BIND_TARGET_GUID?? will bind the GUID of the report's target into your SQL statement.

1. To get Server Details
========================
The SQL below provides the hardware and OS details and presents them in a tabular format.


select
decode(d.rn,1,'Host Name', 2,'Domain',3,'Operating System',4,'System Arch',5,'Manufacturer',6,'Memory(MB)',7,'IP Address') as col1,
decode(d.rn,1,h.host_name,2,h.domain,3, h.os_summary, 4,h.system_config,5,h.ma, 6,h.MEM,7,p1.PROPERTY_VALUE) as col2
from MGMT$OS_HW_SUMMARY h , (select level as rn from dual connect by level <= 7) d, mgmt_targets t1, mgmt_target_properties p1
where h.TARGET_GUID=(select t.TARGET_GUID from MGMT$DB_DBNINSTANCEINFO_ALL i,MGMT$TARGET t
where t.target_name=i.host_name
and i.TARGET_GUID=??EMIP_BIND_TARGET_GUID??)
and p1.PROPERTY_NAME='IP_address'
and p1.target_guid=t1.target_guid
and t1.TARGET_NAME=h.host_name

2. DB Instance Details
======================
select
decode(d.rn,1,'Database Name', 2,'Characterset',3,'Instance Name',4,'National Characterset',5,'Global Name',6,'Log Mode',7,'Version',8,'Default Temp Tblspce') as Property,
decode(d.rn,1,o.database_name, 2,o.characterset,3,o.instance_name,4,o.national_characterset,5,o.global_name,6,o.log_mode,7,o.banner,8,o.default_temp_tablespace) as Value
from mgmt$db_dbninstanceinfo o, (select level as rn from dual connect by level <= 8) d
where o.target_guid = ??EMIP_BIND_TARGET_GUID??
union
select 'DB Size(GB)', to_char(sum(FILE_SIZE)/1024/1024/1024) from MGMT$DB_DATAFILES
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??

3. SGA Details
================
select
substr(sganame,1,length(sganame)-4) as Name,
decode(substr(sganame,length(sganame)-3,4),'(MB)',sgasize*1024,sgasize) as Value
from MGMT$DB_SGA
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??
and sganame in ('Buffered Cache (MB)','Fixed SGA (KB)','Redo Buffers (KB)','Variable SGA (MB)')
4. Variable SGA details
=======================
select
substr(sganame,1,length(sganame)-4) as Name,
decode(substr(sganame,length(sganame)-3,4),'(MB)',sgasize*1024,sgasize) as Value
from MGMT$DB_SGA
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??
and sganame like '%Pool%'

5. Non Default parameters
===========================
select Name,Value from MGMT$DB_INIT_PARAMS
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??
and isdefault='FALSE'
6. Control File Details
=======================
select file_name as FileName,Status from MGMT$DB_CONTROLFILES
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??


7. Instance Performance
======================
select column_label,value FROM SYSMAN.MGMT$METRIC_CURRENT
Where metric_name='instance_efficiency'
and target_guid=??EMIP_BIND_TARGET_GUID??
Above are some of the SQLs used to generate the report. In addition, you can get the details of the redo logs, tablespaces, datafiles, invalid objects, applied patches, users with powerful privileges, etc.
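As one more example in the same style, a datafile listing for the target can be pulled from MGMT$DB_DATAFILES, the view already used in the DB size query above. The FILE_NAME and TABLESPACE_NAME columns are assumptions about that view, so verify them against your repository before relying on this:

select tablespace_name, file_name, round(file_size/1024/1024) as size_mb
from MGMT$DB_DATAFILES
where TARGET_GUID=??EMIP_BIND_TARGET_GUID??
order by tablespace_name, file_name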