Saturday, September 27, 2014

Workaround for PASSWORD EXPIRED or EXPIRE(GRACE)

I had a challenge where the same password had to be kept so the application would keep running, because even the application team didn't know in how many places this password was used. When I checked the account, it was in EXPIRED(GRACE) status:

SQL> select USERNAME,ACCOUNT_STATUS,LOCK_DATE,EXPIRY_DATE,PROFILE from dba_users where username='WYSE';


USERNAME                       ACCOUNT_STATUS                   LOCK_DATE EXPIRY_DA PROFILE
------------------------------ -------------------------------- --------- --------- ------------------------------
WYSE                           EXPIRED(GRACE)                                        DEFAULT

While "ALTER USER WYSE PASSWORD EXPIRE" exists, the “PASSWORD UNEXPIRE” statement does not exist in Oracle.


The password expiration mechanism is a security feature. With it, users are automatically prompted to change their password after a certain period, defined in the profile assigned to the user. This way, a compromised password can only be used for a limited period, after which it must be changed.
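The period and grace window come from the profile. A minimal hedged check (WYSE and the DEFAULT profile are just the names from this example):

SQL> -- password lifetime and grace period defined in the user's profile
SQL> select p.resource_name, p.limit
  2  from dba_profiles p join dba_users u on u.profile = p.profile
  3  where u.username = 'WYSE'
  4  and p.resource_name in ('PASSWORD_LIFE_TIME','PASSWORD_GRACE_TIME');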


The method described here should only be used when there is no way to change the password and you need to keep an application running. (Here I had to keep the same password because it had been hard-coded at the application level.)


Workaround


For this workaround, I used the “ALTER USER ... IDENTIFIED BY VALUES” statement. 


Log in as a user that has been granted the DBA role and verify, as above, that the account_status of the user is EXPIRED or EXPIRED(GRACE). Then use this query to read the password hashes for the user "WYSE" from sys.user$ and construct the ALTER USER statement that resets the password to its current value:


SQL> SELECT 'ALTER USER '|| name ||' IDENTIFIED BY VALUES '''|| spare4 ||';'|| password ||''';' FROM sys.user$ WHERE name='WYSE';


'ALTERUSER'||NAME||'IDENTIFIEDBYVALUES'''||SPARE4||';'||PASSWORD||''';'

-------------------------------------------------------------------------------------------------------------
ALTER USER WYSE IDENTIFIED BY VALUES 'S:AC4C23B91E11B3129D237FDF0EDB9166549E978408BC9795C5B3EA97ADF8;E6077E624E866DEA';

Now run the generated statement (sys.user$.spare4 holds the case-sensitive 11g verifier and sys.user$.password the legacy 10g hash) to reset the password to its existing value:

SQL> ALTER USER WYSE IDENTIFIED BY VALUES 'S:AC4C23B91E11B3129D237FDF0EDB9166549E978408BC9795C5B3EA97ADF8;E6077E624E866DEA';

User altered.

This changes the account status back to OPEN, and the user can log in with the old password:

SQL> select USERNAME,ACCOUNT_STATUS,LOCK_DATE,EXPIRY_DATE,PROFILE from dba_users where username='WYSE';

USERNAME                       ACCOUNT_STATUS                   LOCK_DATE EXPIRY_DA PROFILE
------------------------------ -------------------------------- --------- --------- --------------------------
WYSE                           OPEN                                                 DEFAULT

SQL> conn wyse
Enter password:
Connected.

SQL> show user
USER is "WYSE"

I have found this interesting and useful quite a few times, hence thought to share it with you. - Manish

Tuesday, September 16, 2014

Oracle Exadata Patching

Recently, I got a chance to perform an Exadata patching activity on our X3-2 quarter rack (2 compute nodes + 3 cell nodes) in coordination with Oracle. It consisted of:

1) Upgrade the image of the DB servers from 11.2.3.2.1 to 11.2.3.3.1 in rolling fashion (the most recent image at the time of writing)

2) Apply Bundle Patch 24 (JUL 2014 - 11.2.0.3.24), i.e. the QCPE and QDPE patches, on the RAC Oracle Homes:

Patch description:  "QUARTERLY CRS PATCH FOR EXADATA (JUL 2014 - 11.2.0.3.24) (18906063)"
Patch description:  "QUARTERLY DATABASE PATCH FOR EXADATA (JUL 2014 - 11.2.0.3.24) : (18707883)"

3) Run catbundle in the RAC databases patched above

4) Upgrade the image of the cell (storage) servers from 11.2.3.2.1 to 11.2.3.3.1 in rolling fashion

5) Apply the patch for the InfiniBand (IB) switches

Obviously, these five steps took a lot of planning and prerequisite checks.
We went for the image upgrade because we had hit the following bug, which is resolved in the 11.2.3.3.1 image.

One of our disks was showing the status below.

Issue:

CellCLI> LIST PHYSICALDISK WHERE DISKTYPE=harddisk
         20:0    R7K4ND  normal
         20:1    R7LPXD  normal
         20:2    R7P2ND  normal
         20:3    R7ESGD  normal
         20:4    R7H27D  warning - poor performance
         20:5    R7PK9D  normal
         20:6    R7GWJD  normal
         20:7    R7PL2D  normal
         20:8    R7DN1D  normal
         20:9    R7EASD  normal
         20:10   R748SD  normal
         20:11   R6X83D  normal

[root@inblrdrceladm01 ~]# cellcli -e list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome
         DATA_DR_CD_00_inblrdrceladm01   active                  ONLINE          Yes
         DATA_DR_CD_01_inblrdrceladm01   active                  ONLINE          Yes
         DATA_DR_CD_02_inblrdrceladm01   active                  ONLINE          Yes
         DATA_DR_CD_03_inblrdrceladm01   active                  DROPPED         Yes 
         DATA_DR_CD_04_inblrdrceladm01   proactive failure       DROPPED         Yes
         DATA_DR_CD_05_inblrdrceladm01   active                  ONLINE          Yes
         DBFS_DG_CD_02_inblrdrceladm01   active                  ONLINE          Yes
         DBFS_DG_CD_03_inblrdrceladm01   active                  DROPPED         Yes
         DBFS_DG_CD_04_inblrdrceladm01   proactive failure       DROPPED         Yes
         DBFS_DG_CD_05_inblrdrceladm01   active                  ONLINE          Yes
         RECO_DR_CD_00_inblrdrceladm01   active                  ONLINE          Yes
         RECO_DR_CD_01_inblrdrceladm01   active                  ONLINE          Yes
         RECO_DR_CD_02_inblrdrceladm01   active                  ONLINE          Yes
         RECO_DR_CD_03_inblrdrceladm01   active                  DROPPED         Yes
         RECO_DR_CD_04_inblrdrceladm01   proactive failure       DROPPED         Yes
         RECO_DR_CD_05_inblrdrceladm01   active                  ONLINE          Yes

CellCLI> list cell detail

         cellsrvStatus:          stopped
         msStatus:               running
         rsStatus:               running

cellsrv was stopping automatically on the first cell node for this reason.

Cause (found after raising an SR):

We were hitting Bug 17021128 - NIWOT "CHIP PAUSED" CAUSES HIGH SERVICE TIME ON ALL DRIVES, which affects storage servers where the LSI MegaRAID firmware is below 12.12.0-0178. This has been observed primarily on systems running Exadata Storage Software 11.2.3.2.0 or 11.2.3.2.1 with LSI MegaRAID firmware 12.1.2.0-0140, and Oracle Support identified that we were on that firmware version.

Further evidence of this is the "Chip 0 Paused" messages in the MegaCli firmware log, found during the investigation:

06/26/14 8:34:06: [9e]= 1 [a0]= f [a2]= 9
06/26/14 8:37:59: DM: Chip 0 Paused
06/26/14 8:37:59: Chip <0> Slots: Cur=[133]
06/26/14 8:37:59: [87]= 3 [89]= b [8b]= f [8e]= f [90]=14
06/26/14 8:41:11: DM: Chip 0 Paused
06/26/14 8:41:11: Chip <0> Slots: Cur=[69]
06/26/14 8:41:11: [47]= 1 [4a]= c [4c]=17 [4e]= e [50]= e
06/26/14 8:41:16: DM: Chip 0 Paused
06/26/14 8:41:16: Chip <0> Slots: Cur=[74]
06/26/14 8:41:16: [4c]= 1 [4e]= e [50]= e
06/26/14 8:43:23: DM: Chip 0 Paused
06/26/14 8:43:23: Chip <0> Slots: Cur=[201]

To resolve this issue, the following is recommended: 

Install LSI MegaRAID firmware version 12.12.0-0178, which is included in images 11.2.3.2.2 and 11.2.3.3.0.

Solution (remember this is the rolling method, so there is no need to bring down EBS, Siebel, Hyperion, or any other application):

1) Upgrade Image of DB Servers from 11.2.3.2.1 to 11.2.3.3.1 in rolling fashion

./dbnodeupdate.sh -u -l /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip -v   (prerequisite check only)

./dbnodeupdate.sh -u -l /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip -b   (take the backup)

./dbnodeupdate.sh -u -l /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip -n   (execute the update, skipping the backup already taken)

./dbnodeupdate.sh -c   (complete post-update steps, after patching and node reboot)

2) Apply Bundle Patch 24 (JUL 2014 - 11.2.0.3.24) on the RAC Oracle Homes (manual method)

On the ASM/Grid home first, followed by the remaining RDBMS RAC homes. (We have 11 RDBMS homes for different applications.)

Stop the EM agents (if running)

Check for conflicting patches and roll them back, if any

Run the prepatch script

Apply QUARTERLY CRS PATCH FOR EXADATA (JUL 2014 - 11.2.0.3.24) (18906063) & QUARTERLY DATABASE PATCH FOR EXADATA (JUL 2014 - 11.2.0.3.24) (18707883 - required only on the DB homes, not on the Grid home)

Run the postpatch script

Re-apply the conflicting patches rolled back earlier, if any

Start CRS with rootcrs.pl -patch

crsctl check crs

Start both EM agents

crsctl stat res -t

3) Run catbundle in the RAC databases patched above.

su - oracle
. oraenv 
export ORACLE_SID=<instance_name>
cd $ORACLE_HOME
sqlplus "/ as sysdba"

Check for invalid objects
======================================
column comp_name format a40
column version format a12
column status format a15
select comp_name,version,status from dba_registry;
---------------------------------------------------
column owner format a15
column object_name format a40
column object_type format a20
select owner, object_name, object_type from dba_objects where status='INVALID' order by object_type,owner,object_name;

If there are a lot of invalids, this next command will list only the invalids containing SYS in the owner
-----------------------------------------------------------------------------------------------------------
select owner, object_name, object_type from dba_objects where status='INVALID' and owner like '%SYS%' order by object_type,owner,object_name;

---------------------------------------------------
@?/rdbms/admin/utlprp.sql 16
select comp_name,version,status from dba_registry;

------------------------------------------------
select capture_name from dba_capture where capture_name not like 'OGG$%';

select apply_name from dba_apply where apply_name not like  'OGG$%';

-------------------------------------------------------
select capture_name from dba_capture where capture_name not like 'OGG$%';
exec dbms_capture_adm.stop_capture('capture_name');

select apply_name from dba_apply where apply_name not like  'OGG$%';
exec dbms_apply_adm.stop_apply('apply_name');
-------------------------------------------------------------
@?/rdbms/admin/catbundle.sql exa apply

@?/rdbms/admin/utlprp.sql 16

-------------------------------------------------------------
set lines 200
column owner format a15
column object_name format a40
column object_type format a20
col comp_name for a60
select comp_name,version,status from dba_registry;

select owner, object_name, object_type from dba_objects where status='INVALID' order by object_type,owner,object_name;

If there are a lot of invalids, this next command will list only the invalids containing SYS in the owner

select owner, object_name, object_type from dba_objects where status='INVALID' and owner like '%SYS%' order by object_type,owner,object_name;

-----------------------------------------------------------
select capture_name from dba_capture where capture_name not like 'OGG$%';
exec dbms_capture_adm.start_capture('capture_name');

select apply_name from dba_apply where apply_name not like  'OGG$%';
exec dbms_apply_adm.start_apply('apply_name');

--------------------------------------------------------------
Check that the apply finished successfully

set lines 200
col ACTION_TIME for a40
col COMMENTS for a40
select * from dba_registry_history;

4) Upgrade Image of Cell (Storage) Servers from 11.2.3.2.1 to 11.2.3.3.1 in rolling fashion

cd /u01/patches/CELL/patch_11.2.3.3.1.140708
#./patchmgr -cells cell_group -patch_check_prereq -rolling

The output of the above command should be clean for each cell node.

Check repair times for all mounted disk groups in the Oracle ASM instance and adjust if needed
========================================================================
su - oracle
. oraenv <<EOF 
+ASM1
EOF
sqlplus / as sysasm
select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number=a.group_number and a.name='disk_repair_time';

If the repair time is not 3.6 hours, note the current value and the diskgroup names, then replace <diskgroup_name> in the following statement to adjust it:

alter diskgroup <diskgroup_name> set attribute 'disk_repair_time'='3.6h';   ### set it to the higher side

Repeat the above statement for each diskgroup, or generate all the statements at once as sketched below.
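A hedged generator query such as the following can produce one statement per mounted diskgroup (review the output before running it):

SQL> -- build one ALTER DISKGROUP statement per mounted diskgroup
SQL> select 'alter diskgroup '||name||' set attribute ''disk_repair_time''=''3.6h'';'
  2  from v$asm_diskgroup where state='MOUNTED';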

2) Increase the ASM rebalance power via the asm_power_limit parameter
3) Check that no operation is in progress in V$ASM_OPERATION before starting the cell patching activity (see the sketch below)
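A minimal sketch for checks 2) and 3), run from the ASM instance:

SQL> show parameter asm_power_limit
SQL> -- no rows expected; any row means a rebalance/resync is still in progress
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;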

Cell Patching in Rolling Upgrade  (Initiate from DB Node, root user)
========================================================================

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ pwd
/u01/patches/CELL/patch_11.2.3.3.1.140708

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ cat cell_group
inblrdrceladm01
inblrdrceladm02
inblrdrceladm03

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ cat dbs_group
inblrdrdbadm01
inblrdrdbadm02

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ cat inblrdrceladm01
inblrdrceladm01

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ cat inblrdrceladm02
inblrdrceladm02

[oracle@inblrdrdbadm01 patch_11.2.3.3.1.140708]$ cat inblrdrceladm03
inblrdrceladm03

Cleanup space from any previous runs
==================================================
The -reset_force command is only run the first time the cells are patched to this release.
It is not necessary for subsequent cell patching, even after rolling back the patch.

#./patchmgr -cells cell_group -reset_force   (cell_group contains the cell server hostnames; alternatively, pass a file containing a single hostname)

OR

#./patchmgr -cells inblrdrceladm01 -reset_force  (same for the inblrdrceladm02 / inblrdrceladm03 files)

-------------------------------------------------------------------------------------------------
Always use the -cleanup option before retrying a failed or halted run of the patchmgr utility.

#./patchmgr -cells cell_group -cleanup

OR

#./patchmgr -cells inblrdrceladm01 -cleanup    (same for the inblrdrceladm02 / inblrdrceladm03 files)

Run the prerequisites check   (the output should be clean)
=================================================================
cd /u01/patches/CELL/patch_11.2.3.3.1.140708
#./patchmgr -cells cell_group -patch_check_prereq -rolling

OR

#./patchmgr -cells inblrdrceladm01 -patch_check_prereq -rolling    (same for the inblrdrceladm02 / inblrdrceladm03 files)

Patch the cell nodes (in rolling upgrade)
===========================================

# nohup ./patchmgr -cells inblrdrceladm01 -patch -rolling &   [Repeat for the inblrdrceladm02 / inblrdrceladm03 files, but only after "cellcli -e list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome" shows asmmodestatus ONLINE (not still resyncing) for all grid disks on the cell just patched]

SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes
3-5 of 5 :Working: DO: Finalize patch and check final status on cells. Cells will reboot.

Monitor the patch progress
===================================
cd /u01/patches/CELL/patch_11.2.3.3.1.140708
tail -f nohup.out

Cleanup space
==================
#./patchmgr -cells cell_group -cleanup

OR

#./patchmgr -cells inblrdrceladm01 -cleanup    (same for the inblrdrceladm02 / inblrdrceladm03 files)

Post Checks
=================
#imageinfo -version       
#imageinfo -status       
#uname -r     
#imagehistory
#uptime

#dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome|more

(The next 5 lines are all one command and should not return any output. If output is returned then disks are still resyncing.)

dcli -g cell_group -l root \
"cat /root/attempted_deactivated_by_patch_griddisks.txt | grep -v \
ACTIVATE | while read line; do str=\`cellcli -e list griddisk where \
name = \$line attributes name, status, asmmodestatus\`; echo \$str | \
grep -v \"active ONLINE\"; done" 

So do not start patching another cell if you see disks still resyncing; all disks should be ONLINE.
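As an alternative hedged check from the ASM instance (instead of querying each cell), the following should return no rows before you patch the next cell:

SQL> -- any row returned means a disk is still offline or resyncing
SQL> select name, mode_status, state from v$asm_disk where mode_status <> 'ONLINE';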

Change disk_repair_time back to original value 
==================================================================
su - oracle
. oraenv <<EOF 
+ASM1
EOF
sqlplus / as sysasm
select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number=a.group_number and a.name='disk_repair_time';
For each diskgroup adjusted earlier, set disk_repair_time back to the value noted before patching. Replace <diskgroup_name> and <original value> in the following statement:
alter diskgroup <diskgroup_name> set attribute 'disk_repair_time'='<original value>';
Repeat the above statement for each diskgroup
exit

5) Apply patch for Infiniband (IB) Switch

cd /u01/patches/CELL/patch_11.2.3.3.1.140708

vi ibswitches.lst

One switch per line, with the spine switch listed first, as below:
inblrdrsw-iba0
inblrdrsw-ibb0

./patchmgr -ibswitches ibswitches.lst -upgrade -ibswitch_precheck (Pre-requisites Check)

./patchmgr -ibswitches ibswitches.lst -upgrade (Actual Upgrade)


The output should show SUCCESS. If there are errors, correct them and run the upgrade command again.

############################ Post Activities and Checks #############################

CellCLI> ALTER PHYSICALDISK 20:4 reenable force; 
Physical disk 20:4 was reenabled. 

CellCLI> LIST PHYSICALDISK WHERE DISKTYPE=harddisk 
20:0 R7K4ND normal 
20:1 R7LPXD normal 
20:2 R7P2ND normal 
20:3 R7ESGD normal 
20:4 R7H27D normal    --- issue resolved
20:5 R7PK9D normal 
20:6 R7GWJD normal 
20:7 R7PL2D normal 
20:8 R7DN1D normal 
20:9 R7EASD normal 
20:10 R748SD normal 
20:11 R6X83D normal 

Run the below command on ASM1 

SQL> alter diskgroup DATA_DR add disk 'o/192.168.10.3/DATA_DR_CD_04_inblrdrceladm01' force,'o/192.168.10.3/DATA_DR_CD_03_inblrdrceladm01' force; 

Run the below command on ASM2 

SQL> alter diskgroup DBFS_DG add disk 'o/192.168.10.3/DBFS_DG_CD_04_inblrdrceladm01' force,'o/192.168.10.3/DBFS_DG_CD_03_inblrdrceladm01' force; 
SQL> alter diskgroup RECO_DR add disk 'o/192.168.10.3/RECO_DR_CD_03_inblrdrceladm01' force,'o/192.168.10.3/RECO_DR_CD_04_inblrdrceladm01' force; 

[oracle@inblrdrdbadm01 ~]$ . ASM.env
[oracle@inblrdrdbadm01 ~]$
[oracle@inblrdrdbadm01 ~]$ sqlplus "/as sysasm"

SQL> alter diskgroup DATA_DR add disk 'o/192.168.10.3/DATA_DR_CD_04_inblrdrceladm01' force,'o/192.168.10.3/DATA_DR_CD_03_inblrdrceladm01' force;

Diskgroup altered.

SQL> exit

[oracle@inblrdrdbadm01 ~]$ ssh inblrdrdbadm02
Last login: Thu Aug 14 11:13:52 2014 from inblrdrdbadm01.tajhotels.com
[oracle@inblrdrdbadm02 ~]$ . ASM.env
[oracle@inblrdrdbadm02 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 14 19:24:13 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DBFS_DG add disk 'o/192.168.10.3/DBFS_DG_CD_04_inblrdrceladm01' force,'o/192.168.10.3/DBFS_DG_CD_03_inblrdrceladm01' force;

Diskgroup altered.

SQL> alter diskgroup RECO_DR add disk 'o/192.168.10.3/RECO_DR_CD_03_inblrdrceladm01' force,'o/192.168.10.3/RECO_DR_CD_04_inblrdrceladm01' force;

Diskgroup altered.

[root@inblrdrceladm01 ~]# cellcli -e list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome
         DATA_DR_CD_00_inblrdrceladm01   active  ONLINE  Yes
         DATA_DR_CD_01_inblrdrceladm01   active  ONLINE  Yes
         DATA_DR_CD_02_inblrdrceladm01   active  ONLINE  Yes
         DATA_DR_CD_03_inblrdrceladm01   active  ONLINE  Yes
         DATA_DR_CD_04_inblrdrceladm01   active  ONLINE  Yes
         DATA_DR_CD_05_inblrdrceladm01   active  ONLINE  Yes
         DBFS_DG_CD_02_inblrdrceladm01   active  ONLINE  Yes
         DBFS_DG_CD_03_inblrdrceladm01   active  ONLINE  Yes
         DBFS_DG_CD_04_inblrdrceladm01   active  ONLINE  Yes
         DBFS_DG_CD_05_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_00_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_01_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_02_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_03_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_04_inblrdrceladm01   active  ONLINE  Yes
         RECO_DR_CD_05_inblrdrceladm01   active  ONLINE  Yes

# imageinfo

Kernel version: 2.6.39-400.128.17.el5uek #1 SMP Tue May 27 13:20:24 PDT 2014 x86_64
Cell version: OSS_11.2.3.3.1_LINUX.X64_140708
Cell rpm version: cell-11.2.3.3.1_LINUX.X64_140708-1

Active image version: 11.2.3.3.1.140708
Active image activated: 2014-08-14 13:03:50 +0530
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

# imagehistory
Version                              : 11.2.3.2.1.130109
Image activation date                : 2013-09-24 13:49:36 +0530
Imaging mode                         : fresh
Imaging status                       : success

Version                              : 11.2.3.3.1.140708
Image activation date                : 2014-08-14 13:03:50 +0530
Imaging mode                         : out of partition upgrade
Imaging status                       : success


Finally, this marathon activity completed successfully in approximately 18.5 hours, because of the rolling fashion and the 11 RDBMS homes that needed BP24 followed by catbundle. In non-rolling fashion it would have taken about a third of that, roughly 6 hours, but at the cost of downtime, which wasn't possible. :)

Thanks, and happy reading. - Manish

Wednesday, September 10, 2014

Restoring RMAN Incremental Backup for Standby with Added datafile on Primary

Prerequisites: an RMAN incremental backup from the required SCN had already been taken on the primary database, and the backup pieces had been transferred from the primary server to the standby server.

I had a situation where there was a huge gap between production and standby, and I had no option but to take an RMAN incremental backup from the primary and apply it to the standby to minimize the gap.

The challenge here was that one datafile had been added on the primary database due to a space crunch, and the RMAN incremental backup alone does not create that file on the standby; also, some of the datafiles were in OMF and some in non-OMF format. A hedged check for such files is sketched below.
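To see up front which datafiles the standby will be missing, you can list on the primary the files created after the SCN the incremental was taken from (&incr_base_scn below is a placeholder for that SCN):

SQL> -- datafiles created on the primary after the incremental's base SCN
SQL> select file#, name, creation_change# from v$datafile
  2  where creation_change# > &incr_base_scn;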

Below are the steps for restoring the RMAN incremental backup, along with the issues faced and the solutions used. I hope this is useful in some cases.

[inmumdcdbadm01.tajhotels.com -> oracle:/dbfsmnt/oradbfs/backup/Incr_EBSPRD] :rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Sep 6 02:48:32 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area   14965264384 bytes

Fixed Size                     2242944 bytes
Variable Size               3523217024 bytes
Database Buffers           11408506880 bytes
Redo Buffers                  31297536 bytes

RMAN> restore standby controlfile from '/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ctrl_EBSPRD.f';

Starting restore at 06-SEP-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2506 instance=EBSPRD1 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+DATA_DC/ebsprd/controlfile/current.1007.841346665
output file name=+RECO_DC/ebsprd/controlfile/current.3235.841346665
output file name=+DATA_DC/ebsprd/controlfile/current.1008.841346667
Finished restore at 06-SEP-14

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'EXPIRED'
...

...

...

crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/dbfsmnt/oradbfs/stdby/ForStandby_8dpbp6ls_1_1 RECID=2159 STAMP=851227782
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t0pho2gl_1_1 RECID=2160 STAMP=857475609
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_svpho2gl_1_1 RECID=2161 STAMP=857475607
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t1pho2gn_1_1 RECID=2162 STAMP=857475609
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t6pho2it_1_1 RECID=2163 STAMP=857475704
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t4pho2hp_1_1 RECID=2164 STAMP=857475676
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t2pho2gn_1_1 RECID=2165 STAMP=857475628
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t3pho2h2_1_1 RECID=2166 STAMP=857475657
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t5pho2if_1_1 RECID=2167 STAMP=857475700
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t7phocea_1_1 RECID=2168 STAMP=857485771
Crosschecked 18 objects


RMAN> delete expired  backup;

Deleted 9 EXPIRED objects


RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t0pho2gl_1_1 RECID=2160 STAMP=857475609
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_svpho2gl_1_1 RECID=2161 STAMP=857475607
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t1pho2gn_1_1 RECID=2162 STAMP=857475609
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t6pho2it_1_1 RECID=2163 STAMP=857475704
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t4pho2hp_1_1 RECID=2164 STAMP=857475676
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t2pho2gn_1_1 RECID=2165 STAMP=857475628
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t3pho2h2_1_1 RECID=2166 STAMP=857475657
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t5pho2if_1_1 RECID=2167 STAMP=857475700
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t7phocea_1_1 RECID=2168 STAMP=857485771
Crosschecked 9 objects


RMAN> catalog start with '+DATA_DC/EBSPRD/DATAFILE';

searching for all files that match the pattern +DATA_DC/EBSPRD/DATAFILE

List of Files Unknown to the Database
=====================================
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_QUEUES.1080.841608639
File Name: +data_dc/EBSPRD/DATAFILE/TAJD.1081.841608679
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_IDX.1082.841608711
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_DATA.1083.841608743
File Name: +data_dc/EBSPRD/DATAFILE/SYSTEM.1084.841608999
File Name: +data_dc/EBSPRD/DATAFILE/APPS_UNDOTS2.1085.841717505
File Name: +data_dc/EBSPRD/DATAFILE/APPS_UNDOTS2.1117.847381215
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_DATA.1116.855958105

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_QUEUES.1080.841608639
File Name: +data_dc/EBSPRD/DATAFILE/TAJD.1081.841608679
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_IDX.1082.841608711
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_DATA.1083.841608743
File Name: +data_dc/EBSPRD/DATAFILE/SYSTEM.1084.841608999
File Name: +data_dc/EBSPRD/DATAFILE/APPS_UNDOTS2.1085.841717505
File Name: +data_dc/EBSPRD/DATAFILE/APPS_UNDOTS2.1117.847381215
File Name: +data_dc/EBSPRD/DATAFILE/APPS_TS_TX_DATA.1116.855958105

RMAN> switch datafile 52 to copy;

datafile 52 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_ts_queues.1080.841608639"

RMAN> switch datafile 53 to copy;

datafile 53 switched to datafile copy "+DATA_DC/ebsprd/datafile/tajd.1081.841608679"

RMAN> switch datafile 54 to copy;

datafile 54 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_ts_tx_idx.1082.841608711"

RMAN> switch datafile 55 to copy;

datafile 55 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_ts_tx_data.1083.841608743"

RMAN> switch datafile 56 to copy;

datafile 56 switched to datafile copy "+DATA_DC/ebsprd/datafile/system.1084.841608999"

RMAN> switch datafile 57 to copy;

datafile 57 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_undots2.1085.841717505"

RMAN> switch datafile 58 to copy;

datafile 58 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_undots2.1117.847381215"

RMAN> switch datafile 59 to copy;

datafile 59 switched to datafile copy "+DATA_DC/ebsprd/datafile/apps_ts_tx_data.1116.855958105"

RMAN> switch datafile 60 to copy;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 09/06/2014 02:55:10
RMAN-06571: datafile 60 does not have recoverable copy

RMAN>

RMAN> recover database noredo;

Starting recover at 06-SEP-14
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/06/2014 02:58:01
RMAN-06094: datafile 60 must be restored

RMAN> exit

################ From Primary Database Output###############

SQL> select count(1) from v$datafile;

  COUNT(1)
----------
        60

SQL> select file#,name,creation_time,bytes/1024/1024/1024 from  v$Datafile where file#=60;

     FILE# NAME                                                         CREATION_ BYTES/1024/1024/1024
---------- ------------------------------------------------------------ --------- --------------------
        60 +DATA_DR/ebsprddr/datafile/apps_ts_tx_data.1647.857404571    04-SEP-14                   20



Confirmed that datafile 60 exists on the production database.

################ From Standby Database Output###############


SQL> select count(1) from v$datafile;

  COUNT(1)
----------
        60

SQL> select file#,name,creation_time,bytes/1024/1024/1024 from  v$Datafile where file#=60;

     FILE# NAME                                                         CREATION_ BYTES/1024/1024/1024
---------- ------------------------------------------------------------ --------- --------------------
        60 +DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571    04-SEP-14                   20

This is misleading: the newly restored standby controlfile reports the file, but it does not physically exist in the ASM diskgroup, and the path shown is wrong as well.
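A hedged way to confirm from the standby that the file is physically missing is v$datafile_header, which reads the file headers from disk:

SQL> -- ERROR is populated when the physical file cannot be read
SQL> select file#, status, error from v$datafile_header where file#=60;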

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA_DC/ebsprd/system01.dbf
+DATA_DC/ebsprd/system02.dbf
+DATA_DC/ebsprd/system03.dbf
+DATA_DC/ebsprd/system04.dbf
+DATA_DC/ebsprd/system05.dbf
+DATA_DC/ebsprd/system06.dbf
+DATA_DC/ebsprd/system07.dbf
+DATA_DC/ebsprd/system08.dbf
+DATA_DC/ebsprd/system09.dbf
+DATA_DC/ebsprd/system10.dbf
+DATA_DC/ebsprd/system11.dbf
+DATA_DC/ebsprd/sysaux01.dbf
+DATA_DC/ebsprd/undo01.dbf
+DATA_DC/ebsprd/undo02.dbf
+DATA_DC/ebsprd/a_archive01.dbf
+DATA_DC/ebsprd/a_int01.dbf
+DATA_DC/ebsprd/a_media01.dbf
+DATA_DC/ebsprd/a_nolog01.dbf
+DATA_DC/ebsprd/a_queue01.dbf
+DATA_DC/ebsprd/a_queue02.dbf
+DATA_DC/ebsprd/a_queue03.dbf
+DATA_DC/ebsprd/a_ref01.dbf
+DATA_DC/ebsprd/a_ref02.dbf
+DATA_DC/ebsprd/a_summ01.dbf
+DATA_DC/ebsprd/a_summ02.dbf
+DATA_DC/ebsprd/apps_ts_tools01.dbf
+DATA_DC/ebsprd/a_txn_data01.dbf
+DATA_DC/ebsprd/a_txn_data02.dbf
+DATA_DC/ebsprd/a_txn_data03.dbf
+DATA_DC/ebsprd/a_txn_data05.dbf
+DATA_DC/ebsprd/a_txn_data06.dbf
+DATA_DC/ebsprd/a_txn_data07.dbf
+DATA_DC/ebsprd/a_txn_data08.dbf
+DATA_DC/ebsprd/a_txn_data09.dbf
+DATA_DC/ebsprd/a_txn_data10.dbf
+DATA_DC/ebsprd/a_txn_data4.dbf
+DATA_DC/ebsprd/a_txn_ind01.dbf
+DATA_DC/ebsprd/a_txn_ind02.dbf
+DATA_DC/ebsprd/a_txn_ind03.dbf
+DATA_DC/ebsprd/a_txn_ind04.dbf
+DATA_DC/ebsprd/a_txn_ind05.dbf
+DATA_DC/ebsprd/a_txn_ind06.dbf
+DATA_DC/ebsprd/a_txn_ind07.dbf
+DATA_DC/ebsprd/ctxd01.dbf
+DATA_DC/ebsprd/discoverer01.dbf
+DATA_DC/ebsprd/odm.dbf
+DATA_DC/ebsprd/olap.dbf
+DATA_DC/ebsprd/owad01.dbf
+DATA_DC/ebsprd/portal01.dbf
+DATA_DC/ebsprd/tajd01.dbf
+DATA_DC/ebsprd/tajx01.dbf
+DATA_DC/ebsprd/datafile/apps_ts_queues.1080.841608639      ### Datafile 52, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/tajd.1081.841608679                ### Datafile 53, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/apps_ts_tx_idx.1082.841608711      ### Datafile 54, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/apps_ts_tx_data.1083.841608743     ### Datafile 55, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/system.1084.841608999              ### Datafile 56, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/apps_undots2.1085.841717505        ### Datafile 57, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/apps_undots2.1117.847381215        ### Datafile 58, OMF which I've just cataloged.
+DATA_DC/ebsprd/datafile/apps_ts_tx_data.1116.855958105     ### Datafile 59, OMF which I've just cataloged.
+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571   ### Datafile 60, Doesn't exist, also showing wrong path.

60 rows selected.



SQL> ALTER DATABASE CREATE DATAFILE '+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571' AS '+DATA_DC';
ALTER DATABASE CREATE DATAFILE '+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571' AS '+DATA_DC'
*
ERROR at line 1:
ORA-01275: Operation CREATE DATAFILE is not allowed if standby file management
is automatic.


SQL> SHOW PARAMETER STANDBY

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string      ?/dbs/arch
standby_file_management              string      AUTO
SQL> ALTER SYSTEM SET standby_file_management='MANUAL' SID='*' SCOPE=BOTH;

System altered.

SQL> SHOW PARAMETER STANDBY

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string      ?/dbs/arch
standby_file_management              string      MANUAL
SQL> ALTER DATABASE CREATE DATAFILE '+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571' AS '+DATA_DC';
ALTER DATABASE CREATE DATAFILE '+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571' AS '+DATA_DC'
*
ERROR at line 1:
ORA-01136: specified size of file 60 (12800 blocks) is less than original size
of 1310720 blocks
ORA-01110: data file 60:
'+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571'


SQL> ALTER DATABASE CREATE DATAFILE '+DATA_DC/ebsprddr/datafile/apps_ts_tx_data.1647.857404571' AS '+DATA_DC' SIZE 20G;

Database altered.

SQL> ALTER SYSTEM SET standby_file_management='AUTO' SID='*' SCOPE=BOTH;

System altered.

SQL> SHOW PARAMETER STANDBY

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string      ?/dbs/arch
standby_file_management              string      AUTO

SQL> SELECT NAME FROM V$DATAFILE where FILE#=60;

NAME
--------------------------------------------------------------------------------
+DATA_DC/ebsprd/datafile/apps_ts_tx_data.681.857531541      #### (here the empty datafile actually got created in the proper location, with a different OMF name)


[inmumdcdbadm01.tajhotels.com -> oracle:/dbfsmnt/oradbfs] :rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Sep 6 03:13:14 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EBSPRD (DBID=642087017, not open)

RMAN> run {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> allocate channel c3 type disk;
5> allocate channel c4 type disk;
6> allocate channel c5 type disk;
7> allocate channel c6 type disk;
8> allocate channel c7 type disk;
9> allocate channel c8 type disk;
10> recover database noredo;
11> release channel c1;
12> release channel c2;
13> release channel c3;
14> release channel c4;
15> release channel c5;
16> release channel c6;
17> release channel c7;
18> release channel c8;
19> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=6888 instance=EBSPRD1 device type=DISK

allocated channel: c2
channel c2: SID=7514 instance=EBSPRD1 device type=DISK

allocated channel: c3
channel c3: SID=7827 instance=EBSPRD1 device type=DISK

allocated channel: c4
channel c4: SID=8140 instance=EBSPRD1 device type=DISK

allocated channel: c5
channel c5: SID=9079 instance=EBSPRD1 device type=DISK

allocated channel: c6
channel c6: SID=9392 instance=EBSPRD1 device type=DISK

allocated channel: c7
channel c7: SID=942 instance=EBSPRD1 device type=DISK

allocated channel: c8
channel c8: SID=2192 instance=EBSPRD1 device type=DISK

Starting recover at 06-SEP-14
channel c1: starting incremental datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
...
...
...
...
...
channel c8: reading from backup piece /dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t5pho2if_1_1
channel c1: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t0pho2gl_1_1 tag=FOR_STANDBY
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:04:27
channel c2: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_svpho2gl_1_1 tag=FOR_STANDBY
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:05:07
channel c3: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t1pho2gn_1_1 tag=FOR_STANDBY
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:05:37
channel c4: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t6pho2it_1_1 tag=FOR_STANDBY
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:06:07
channel c5: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t4pho2hp_1_1 tag=FOR_STANDBY
channel c5: restored backup piece 1
channel c5: restore complete, elapsed time: 00:15:17
channel c6: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t2pho2gn_1_1 tag=FOR_STANDBY
channel c6: restored backup piece 1
channel c6: restore complete, elapsed time: 00:16:27
channel c7: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t3pho2h2_1_1 tag=FOR_STANDBY
channel c7: restored backup piece 1
channel c7: restore complete, elapsed time: 00:22:37
channel c8: piece handle=/dbfsmnt/oradbfs/backup/Incr_EBSPRD/ForStandby_t5pho2if_1_1 tag=FOR_STANDBY
channel c8: restored backup piece 1
channel c8: restore complete, elapsed time: 00:31:54

Finished recover at 06-SEP-14

released channel: c1

released channel: c2

released channel: c3

released channel: c4

released channel: c5

released channel: c6

released channel: c7

released channel: c8

RMAN> exit

Recovery Manager complete.


[inmumdcdbadm01.tajhotels.com -> oracle:/dbfsmnt/oradbfs] :sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 6 03:53:21 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter database recover managed standby database using current logfile disconnect parallel 8;

Database altered.


SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1          23306
         2          22623

SQL> select PROCESS,STATUS,THREAD#,SEQUENCE#,BLOCK# from v$managed_standby order by 3;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#
--------- ------------ ---------- ---------- ----------
ARCH      CONNECTED             0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
ARCH      CONNECTED             0          0          0
ARCH      CONNECTED             0          0          0
ARCH      CONNECTED             0          0          0
ARCH      CONNECTED             0          0          0
ARCH      CONNECTED             0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  0          0          0
RFS       IDLE                  1      23392     467617
MRP0      APPLYING_LOG          1      23278     522838
ARCH      CLOSING               1      23391     114688
RFS       IDLE                  2      22696     150921
ARCH      CLOSING               2      22695     923648

25 rows selected.

SQL>

########################## On Primary ##########################

SQL> select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1          23391
         2          22694

########################## On Standby ##########################

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1          23390
         2          22694


Now you can see that the standby is almost in sync with the primary, with the help of the RMAN incremental backup.
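For ongoing monitoring, a quick hedged check on the standby is v$archive_gap, which should return no rows once the gap is resolved:

SQL> -- any row here is a sequence range the standby has not yet received
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;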

Any suggestions / comments are welcome.
-Manish