RSYNC Examples in AIX

Backing up data using the rsync command

rsync is a great tool for backing up and restoring files. I'll use some examples to explain how it works.

Example of the remote server and folder that needs to be backed up or copied:
Remote host name: server01.comentum.com
Remote folder: /home/user01/
Remote user: user01

rsync example for backing up / copying from remote server to local Linux computer:
rsync -arv user01@server01.comentum.com:/home/user01/ /home/bob/user01backup/
(/home/bob/user01backup/ is a local Linux folder path)

rsync example for backing up / copying from remote server to local Mac computer:
rsync -arv user01@server01.comentum.com:/home/user01/ /Users/bob/user01backup/
(/Users/bob/user01backup/ is a local Mac folder path)

rsync example for backing up / copying from remote server to local Mac computer and external USB drive:
rsync -arv user01@server01.comentum.com:/home/user01/ /Volumes/westerndigital-usb/user01backup/
(/Volumes/westerndigital-usb/user01backup/ is an external USB Drive path on a local Mac computer)

Here is what the "-arv" option does:
a = archive - preserves permissions (owners, groups), times, symbolic links, and devices (it also implies recursion)
r = recursive - copies directories and subdirectories (already implied by -a, but harmless to state explicitly)
v = verbose - prints what is being copied to the screen

More Examples:
rsync -rv user01@server01.comentum.com:/home/user01/ /home/bob/user01backup/
(This example will copy folders and sub-folders but will not preserve permissions, times, or symbolic links during the transfer)

rsync -arv --exclude 'logs' user01@server01.comentum.com:/home/user01/ /Users/bob/user01backup/
(This example will copy everything (folders, sub-folders, etc.), will preserve permissions, times, and links, but will exclude the folder /home/user01/logs/ from being copied)

Use of "/" at the end of a path:
When using "/" at the end of the source, rsync will copy only the contents of the last folder.
When not using "/" at the end of the source, rsync will copy the last folder itself together with its contents.

For the destination, a trailing "/" makes no practical difference when the target is a directory: in both cases rsync creates the destination folder if needed and places the data inside it.

If the rsync binary lives in /usr/bin but your tools expect it in /usr/local/bin, a symlink fixes the path:

ln -s /usr/bin/rsync /usr/local/bin/rsync

OPTIONS SUMMARY
       Here is a short summary of the options available in rsync. 

        -v, --verbose               increase verbosity
        -q, --quiet                 suppress non-error messages
            --no-motd               suppress daemon-mode MOTD (see caveat)
        -c, --checksum              skip based on checksum, not mod-time & size
        -a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
            --no-OPTION             turn off an implied OPTION (e.g. --no-D)
        -r, --recursive             recurse into directories
        -R, --relative              use relative path names
            --no-implied-dirs       don't send implied dirs with --relative
        -b, --backup                make backups (see --suffix & --backup-dir)
            --backup-dir=DIR        make backups into hierarchy based in DIR
            --suffix=SUFFIX         backup suffix (default ~ w/o --backup-dir)
        -u, --update                skip files that are newer on the receiver
            --inplace               update destination files in-place
            --append                append data onto shorter files
            --append-verify         --append w/old data in file checksum
        -d, --dirs                  transfer directories without recursing
        -l, --links                 copy symlinks as symlinks
        -L, --copy-links            transform symlink into referent file/dir
            --copy-unsafe-links     only "unsafe" symlinks are transformed
            --safe-links            ignore symlinks that point outside the tree
        -k, --copy-dirlinks         transform symlink to dir into referent dir
        -K, --keep-dirlinks         treat symlinked dir on receiver as dir
        -H, --hard-links            preserve hard links
        -p, --perms                 preserve permissions
        -E, --executability         preserve executability
            --chmod=CHMOD           affect file and/or directory permissions
        -A, --acls                  preserve ACLs (implies -p)
        -X, --xattrs                preserve extended attributes
        -o, --owner                 preserve owner (super-user only)
        -g, --group                 preserve group
            --devices               preserve device files (super-user only)
            --specials              preserve special files
        -D                          same as --devices --specials
        -t, --times                 preserve modification times
        -O, --omit-dir-times        omit directories from --times
            --super                 receiver attempts super-user activities
            --fake-super            store/recover privileged attrs using xattrs
        -S, --sparse                handle sparse files efficiently
        -n, --dry-run               perform a trial run with no changes made
        -W, --whole-file            copy files whole (w/o delta-xfer algorithm)
        -x, --one-file-system       don't cross filesystem boundaries
        -B, --block-size=SIZE       force a fixed checksum block-size
        -e, --rsh=COMMAND           specify the remote shell to use
            --rsync-path=PROGRAM    specify the rsync to run on remote machine
            --existing              skip creating new files on receiver
            --ignore-existing       skip updating files that exist on receiver
            --remove-source-files   sender removes synchronized files (non-dir)
            --del                   an alias for --delete-during
            --delete                delete extraneous files from dest dirs
            --delete-before         receiver deletes before transfer (default)
            --delete-during         receiver deletes during xfer, not before
            --delete-delay          find deletions during, delete after
            --delete-after          receiver deletes after transfer, not before
            --delete-excluded       also delete excluded files from dest dirs
            --ignore-errors         delete even if there are I/O errors
            --force                 force deletion of dirs even if not empty
            --max-delete=NUM        don't delete more than NUM files
            --max-size=SIZE         don't transfer any file larger than SIZE
            --min-size=SIZE         don't transfer any file smaller than SIZE
            --partial               keep partially transferred files
            --partial-dir=DIR       put a partially transferred file into DIR
            --delay-updates         put all updated files into place at end
        -m, --prune-empty-dirs      prune empty directory chains from file-list
            --numeric-ids           don't map uid/gid values by user/group name
            --timeout=SECONDS       set I/O timeout in seconds
            --contimeout=SECONDS    set daemon connection timeout in seconds
        -I, --ignore-times          don't skip files that match size and time
            --size-only             skip files that match in size
            --modify-window=NUM     compare mod-times with reduced accuracy
        -T, --temp-dir=DIR          create temporary files in directory DIR
        -y, --fuzzy                 find similar file for basis if no dest file
            --compare-dest=DIR      also compare received files relative to DIR
            --copy-dest=DIR         ... and include copies of unchanged files
            --link-dest=DIR         hardlink to files in DIR when unchanged
        -z, --compress              compress file data during the transfer
            --compress-level=NUM    explicitly set compression level
            --skip-compress=LIST    skip compressing files with suffix in LIST
        -C, --cvs-exclude           auto-ignore files in the same way CVS does
        -f, --filter=RULE           add a file-filtering RULE
        -F                          same as --filter='dir-merge /.rsync-filter'
                                    repeated: --filter='- .rsync-filter'
            --exclude=PATTERN       exclude files matching PATTERN
            --exclude-from=FILE     read exclude patterns from FILE
            --include=PATTERN       don't exclude files matching PATTERN
            --include-from=FILE     read include patterns from FILE
            --files-from=FILE       read list of source-file names from FILE
        -0, --from0                 all *from/filter files are delimited by 0s
        -s, --protect-args          no space-splitting; wildcard chars only
            --address=ADDRESS       bind address for outgoing socket to daemon
            --port=PORT             specify double-colon alternate port number
            --sockopts=OPTIONS      specify custom TCP options
            --blocking-io           use blocking I/O for the remote shell
            --stats                 give some file-transfer stats
        -8, --8-bit-output          leave high-bit chars unescaped in output
        -h, --human-readable        output numbers in a human-readable format
            --progress              show progress during transfer
        -P                          same as --partial --progress
        -i, --itemize-changes       output a change-summary for all updates
            --out-format=FORMAT     output updates using the specified FORMAT
            --log-file=FILE         log what we're doing to the specified FILE
            --log-file-format=FMT   log updates using the specified FMT
            --password-file=FILE    read daemon-access password from FILE
            --list-only             list the files instead of copying them
            --bwlimit=KBPS          limit I/O bandwidth; KBytes per second
            --write-batch=FILE      write a batched update to FILE
            --only-write-batch=FILE like --write-batch but w/o updating dest
            --read-batch=FILE       read a batched update from FILE
            --protocol=NUM          force an older protocol version to be used
            --iconv=CONVERT_SPEC    request charset conversion of filenames
            --checksum-seed=NUM     set block/file checksum seed (advanced)
        -4, --ipv4                  prefer IPv4
        -6, --ipv6                  prefer IPv6
            --version               print version number
       (-h) --help                  show this help (see below for -h comment)

       Rsync can also be run as a daemon, in which case the following options are accepted:

            --daemon                run as an rsync daemon
            --address=ADDRESS       bind to the specified address
            --bwlimit=KBPS          limit I/O bandwidth; KBytes per second
            --config=FILE           specify alternate rsyncd.conf file
            --no-detach             do not detach from the parent
            --port=PORT             listen on alternate port number
            --log-file=FILE         override the "log file" setting
            --log-file-format=FMT   override the "log format" setting
            --sockopts=OPTIONS      specify custom TCP options
        -v, --verbose               increase verbosity
        -4, --ipv4                  prefer IPv4
        -6, --ipv6                  prefer IPv6
        -h, --help                  show this help (if used after --daemon)
Posted in OS-AIX

RMAN Incremental Backup Scripts Using Day Names for Tags

Set the retention policy to a recovery window of 7 days.

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

Script for Backup Level 0

backup_0.sh

ORACLE_BASE=/oracle
ORACLE_HOME=$ORACLE_BASE/orahome
ORACLE_SID=orcl

export ORACLE_BASE ORACLE_HOME ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin

rman target='system/bagbag' cmdfile='/oracle/rman_script/rman_backup0.cmd' log='/oracle/rman_script/backup-level0.log'

currentdate=`date '+%y%m%d_%H%M'`

file_name=hr_db_DC$currentdate.dmp
file_log=hr_db_DC$currentdate.log

expdp "'/ as sysdba'" directory=bck_dir schemas=HR_DB LOGFILE=$file_log DUMPFILE=$file_name

compress /backup_oracle/$file_name
find /backup_oracle -name '*.log' -ctime +6 -exec rm -f {} \;
find /backup_oracle -name '*.Z' -ctime +6 -exec rm -f {} \;
exit

rman_backup0.cmd

connect catalog rman/pass;
crosscheck archivelog all;
crosscheck backup;

RUN {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate channel c3 device type disk;
allocate channel c4 device type disk;
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backupsystem/fra/orcl/autobackup/CL_SUNDAY_%F.bck';
BACKUP tag='SUNDAY_LVL0' AS COMPRESSED backupset INCREMENTAL LEVEL 0 DATABASE plus archivelog;
delete noprompt archivelog until time 'sysdate-1' backed up 1 times to device type disk;
delete force noprompt obsolete;
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}
delete noprompt expired archivelog all;
delete noprompt expired backup;

 

Script for Backup Level 1

backup_1.sh

ORACLE_BASE=/oracle
ORACLE_HOME=$ORACLE_BASE/orahome
ORACLE_SID=orcl

export ORACLE_BASE ORACLE_HOME ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin

_dow="$(date +'%A')"
_logfile="/oracle/rman_script/rman_backup${_dow}.log"
_cmdfile="/oracle/rman_script/rman_backup${_dow}.cmd"
#echo $_cmdfile
#echo $_logfile

rman target='system/bagbag' cmdfile=$_cmdfile log=$_logfile
currentdate=`date '+%y%m%d_%H%M'`

file_name=hr_db_DC$currentdate.dmp
file_log=hr_db_DC$currentdate.log

expdp "'/ as sysdba'" directory=hr_db_dir schemas=hr_db LOGFILE=$file_log DUMPFILE=$file_name

compress /backup_oracle/$file_name
find /backup_oracle -name '*.log' -ctime +6 -exec rm -f {} \;
find /backup_oracle -name '*.Z' -ctime +6 -exec rm -f {} \;
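With backup_0.sh and backup_1.sh in place, the two scripts can be scheduled from cron, level 0 on Sunday and level 1 the other days. The times below are illustrative, not from the original post:

```
# crontab entries (illustrative times): level 0 on Sunday,
# level 1 Monday through Saturday, both at 01:00
0 1 * * 0   /oracle/rman_script/backup_0.sh
0 1 * * 1-6 /oracle/rman_script/backup_1.sh
```

Because backup_1.sh picks its .cmd file from the current day name (date +'%A'), a single cron entry covers all six level-1 days.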

rman_backupMonday.cmd

connect catalog rman/pass;
RUN {
allocate channel D1 type disk;
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backupsystem/fra/orcl/autobackup/CL_MONDAYDAY_%F.bck';
BACKUP TAG='MONDAY_LVL1' AS COMPRESSED backupset INCREMENTAL LEVEL 1 DATABASE plus archivelog;
delete noprompt archivelog until time 'sysdate-1' backed up 1 times to device type disk;
delete force noprompt obsolete;
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
release channel D1;
}

To create the command file for another day, copy rman_backupMonday.cmd under that day's name, for example rman_backupTuesday.cmd, and so on.

In each copy, replace the tag name, so BACKUP TAG='MONDAY_LVL1' becomes BACKUP TAG='TUESDAY_LVL1', and update the controlfile autobackup format:

SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backupsystem/fra/orcl/autobackup/CL_MONDAYDAY_%F.bck';

becomes

SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backupsystem/fra/orcl/autobackup/CL_TUESDAYDAY_%F.bck';
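Rather than editing each copy by hand, the day-name substitution can be scripted with sed. This is a sketch: the scratch directory and the one-line template below stand in for /oracle/rman_script and the real rman_backupMonday.cmd:

```shell
#!/bin/sh
# Sketch: generate the remaining per-day command files from the Monday
# template. $dir is a scratch directory standing in for /oracle/rman_script.
set -e
dir=/tmp/rman_cmd_demo
rm -rf "$dir"
mkdir -p "$dir"

# One-line stand-in for the real Monday template.
cat > "$dir/rman_backupMonday.cmd" <<'EOF'
BACKUP TAG='MONDAY_LVL1' AS COMPRESSED backupset INCREMENTAL LEVEL 1 DATABASE plus archivelog;
EOF

for day in Tuesday Wednesday Thursday Friday Saturday; do
    up=$(printf '%s' "$day" | tr '[:lower:]' '[:upper:]')
    # Rewrites both the TAG and the CL_<DAY>DAY autobackup format.
    sed "s/MONDAY/$up/g" "$dir/rman_backupMonday.cmd" > "$dir/rman_backup$day.cmd"
done

cat "$dir/rman_backupTuesday.cmd"   # tag becomes TUESDAY_LVL1
```

Pointing $dir at the real script directory and using the full Monday template would produce all five remaining .cmd files in one pass.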

Posted in ORACLE, RMAN

Import Data Pump performance problem

The import with Data Pump of tables into an Oracle11g Release 11.2.0.4 database takes almost 45 minutes, while the same import into Oracle10g Release 10.2.0.4 takes only 2 minutes.
The slow Import Data Pump performance problem might also reproduce in other Oracle11g Releases (like: 11.2.0.3) or Oracle12c Release 12.1.0.1.
The problem also reproduces when running an Export Data Pump job from such a release database.

The instance was started with following parameters (other configurations also possible):

sga_target           = 5G
shared_pool_size     = 0
shared_pool_reserved_size = 48653926
large_pool_size      = 0
java_pool_size       = 0
streams_pool_size    = 0
compatible           = "11.2.0.4"
pga_aggregate_target = 2G

An SQL trace file was created with:

ALTER SYSTEM SET events '10046 trace name context forever, level 12';

-- run import

ALTER SYSTEM SET events '10046 trace name context off';

The Data Pump Master and Worker trace files were analyzed with:

$ tkprof ORCL_dw00_11613.trc ORCL_dw00_11613_exe.out waits=y sort=exeela

showing:

...
Elapsed times include waiting on following events:
   Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  wait for unread message on broadcast channel
                                                415        1.00        404.95
  Streams AQ: enqueue blocked on low memory
                                                 12       60.00        720.10
  db file scattered read                          1        0.01          0.01
  Disk file operations I/O                       27        0.04          0.16
  direct path read                               12        0.00          0.01
  db file sequential read                        64        0.01          0.10
  control file sequential read                   35        0.00          0.04
  library cache: mutex X                          1        0.00          0.00
  direct path write                            1931        0.51         17.58
  KSV master wait                                 1        0.00          0.00

and:

$ tkprof ORCL_dm00_10978.trc ORCL_dm00_10978_exe.out waits=y sort=exeela

showing:

...
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  wait for unread message on broadcast channel
                                                698        1.00        684.41
  Disk file operations I/O                        2        0.00          0.00
  Streams AQ: enqueue blocked on low memory
                                                 21       60.00       1201.93
  db file sequential read                        28        0.00          0.03
  class slave wait                                1        0.00          0.00
  library cache: mutex X                          1        0.00          0.00
  direct path write                               2        0.00          0.00
  reliable message                               20        0.00          0.00
  db file scattered read                          1        0.00          0.00
  enq: RO - fast object reuse                     9        0.01          0.03
  enq: CR - block range reuse ckpt                9        0.00          0.05

The tkprof output files clearly show that most of the time was spent waiting on:

Streams AQ: enqueue blocked on low memory

 

CAUSE

1. If the size of the streams pool is being modified, this can result in excessive waits on 'Streams AQ: enqueue blocked on low memory'.
Every occurrence of this event causes a 1-minute delay.

Reference:
Bug 17365043 – "STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY" WHEN REDUCING STREAMS_POOL_SIZE, fixed in 12.2.0.1

2. The fix for Bug 17365043 addresses the majority of the issues where 'Streams AQ: enqueue blocked on low memory' is reported. If the problem is not fixed, it may be related to the following issues:

  • Unpublished Bug 18828868 – EXPLOSION IN AQ BACKGROUND SLAVES RESULTING IN ORA-18/ORA-20
    Initially fixed in: 12.1.0.2 and 12.2.0.1
    However, the fix for this bug was actually superseded by the fix for below mentioned Bug 21286665.

    For details, see also:
    Note 1990633.1 – Expdp Is Very Slow After Upgrade From 11.2.0.3 To 11.2.0.4 With Wait On AQ: enqueue blocked on low memory

  • Bug 21286665 – EXPORT IS SLOW WAITING FOR "STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY" AFTER INSTALLING PATCH 17365043
    superseded by the fix in unpublished Bug 24560906 – HIGH CPU USAGE FOR PROCESS ORA_Q001_DBT11 AND ORA_Q007_DBT11.

 

SOLUTION

1. As a temporary workaround, you can restart the database, and re-run the Data Pump job. The problem usually reproduces again after some database uptime.

– OR –

2. For a more permanent workaround, explicitly set streams_pool_size to a fixed (large enough) value, e.g. 150 MB (or 300 MB if needed), which will then act as a minimum:

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=both;

And re-run the Export or Import Data Pump job.

If you cannot modify STREAMS_POOL_SIZE dynamically, set the value in the spfile and restart the database. Note that you may also have to increase SGA_TARGET or MEMORY_TARGET.

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP

 

NOTE:
Instead of starting a new job, you can suspend the job and resume the same job after setting the streams pool, restarting the instance, and setting the underscore parameter.
This saves time when the issue appears in the middle of a Data Pump operation.
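The suspend/resume flow can be done with Data Pump's interactive command mode (STOP_JOB/START_JOB). The session below is a sketch: the job name and password are placeholders (query DBA_DATAPUMP_JOBS for the real job name):

```
$ expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> STOP_JOB=IMMEDIATE

-- set streams_pool_size (and the underscore parameter if needed),
-- restart the instance, then re-attach:

$ expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> START_JOB
Export> CONTINUE_CLIENT
```

The stopped job keeps its master table, so START_JOB resumes from where it left off instead of exporting everything again.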

— OR —

3. Apply interim Patch 17365043 if available for your platform and Oracle version, and re-run the Export Data Pump job.

NOTE:

If the problem is not fixed after implementing one of the above solutions, a fix for unpublished Bug 24560906 must also be installed before reporting the issue to Oracle Support.
Possible solutions for unpublished Bug 24560906 are:

1. In addition to setting a minimum value for STREAMS_POOL_SIZE to avoid Bug 17365043, set the following parameter as a workaround for unpublished Bug 24560906:

ALTER SYSTEM SET "_disable_streams_pool_auto_tuning"=TRUE SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP

— OR —

2. Apply interim Patch 24560906 if available for your platform and Oracle version, and re-run the Export Data Pump job.

Use the MOS Patch Planner Tool to check for conflicting patches.
For details, see:
Note 1317012.1 – How To Use MOS Patch Planner To Check And Request The Conflict Patches?

Please contact Oracle Support for a backport or merge request if no patch exists for your Oracle version.

https://support.oracle.com/epmos/faces/DocumentDisplay?_adf.ctrl=&_afrLoop=168697143354393&id=1596645.1&displayIndex=3&_afrWindowMode=0&_adf.ctrl-state=ss5jbg15b_4

 

REFERENCES

BUG:17365043 – “STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY” WHEN REDUCING STREAMS_POOL_SIZE
BUG:21286665 – EXPORT IS SLOW WAITING FOR “STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY”
NOTE:1990633.1 – Expdp Is Very Slow After Upgrade From 11.2.0.3 To 11.2.0.4 With Wait On AQ: enqueue blocked on low memory
BUG:19014757 – DATAPUMP VERY SLOW WITH STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY

Posted in ORACLE

How to Handle RMAN-06023

Part of my daily tasks is to perform RMAN backup and recovery scenarios in my test environment, so that I am familiar with every possible situation that can happen.

While performing this task I hit error RMAN-06023.

I have backup files of the database:

$ ls -l /oracle/backup/*
ARCH_OTP_20101117_0tlt6lum_s8221_p1
controlf_OTP_0ult6mop_8222_20101117
FULL_OTP_0rlt6ip4
spfile_OTP_0vlt6mos_8223_20101117

Restore the controlfile from backup and mount the database:

$ rman target /
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Nov 18 08:32:09 2010
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database (not started)

RMAN> startup nomount;
Oracle instance started
Total System Global Area     838860800 bytes
Fixed Size                     2087672 bytes
Variable Size                750781704 bytes
Database Buffers              67108864 bytes
Redo Buffers                  18882560 bytes

RMAN> run
2> {
3> allocate channel c1 device type disk;
4> restore controlfile from '/oracle/backup/controlf_OTP_0ult6mop_8222_20101117';
5> }

Starting restore at 18.11.2010 08:40:51

channel c1: restoring control file
channel c1: restore complete, elapsed time: 00:00:01
output filename=/oracle/product/10.2.0/db_1/dbs/control01.ctl
output filename=/oracle/product/10.2.0/db_1/dbs/control02.ctl
Finished restore at 18.11.2010 08:40:52
released channel: c1

RMAN> alter database mount;
database mounted

I will delete all EXPIRED backups from the RMAN repository:

RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;

RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired archivelog all;

For this case it is very important to mention that I had enabled controlfile autobackup (CONFIGURE CONTROLFILE AUTOBACKUP ON) in older incarnations. With this parameter enabled, RMAN automatically backs up the controlfile and server parameter file whenever the database structure metadata in the controlfile changes, or whenever a backup or copy operation is performed using RMAN.

The LIST BACKUP command shows that there are two old autobackup files in the flash recovery area.

RMAN> list backup;


List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
8199    Full    7.89M      DISK        00:00:00     16.06.2010 14:50:28
        BP Key: 8199   Status: AVAILABLE  Compressed: NO  Tag: TAG20100616T145028
        Piece Name: /oracle/oradata/archive/flash_recovery_area/OTP/autobackup/2010_06_16/o1_mf_s_721839028_61kl4o2y_.bkp
  Control File Included: Ckp SCN: 43471045788   Ckp time: 16.06.2010 14:50:28
  SPFILE Included: Modification time: 16.06.2010 13:15:03

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
8200    Full    7.89M      DISK        00:00:00     16.06.2010 14:49:49
        BP Key: 8200   Status: AVAILABLE  Compressed: NO  Tag: TAG20100616T144949
        Piece Name: /oracle/oradata/archive/flash_recovery_area/OTP/autobackup/2010_06_16/o1_mf_s_721838989_61kl3hhq_.bkp
  Control File Included: Ckp SCN: 43471045571   Ckp time: 16.06.2010 14:49:49
  SPFILE Included: Modification time: 16.06.2010 13:15:03

To continue with my restore process I will catalog my backup files noted before.

RMAN> catalog start with '/oracle/backup/';

searching for all files that match the pattern /oracle/backup/

List of Files Unknown to the Database
=====================================
File Name: /oracle/backup/FULL_OTP_0rlt6ip4
File Name: /oracle/backup/ARCH_OTP_20101117_0tlt6lum_s8221_p1
File Name: /oracle/backup/controlf_OTP_0ult6mop_8222_20101117

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oracle/backup/FULL_OTP_0rlt6ip4
File Name: /oracle/backup/ARCH_OTP_20101117_0tlt6lum_s8221_p1
File Name: /oracle/backup/controlf_OTP_0ult6mop_8222_20101117

Check backup of database.

RMAN> list backup of database;


List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
8196    Incr 0  11.18G     DISK        00:53:38     17.11.2010 01:53:58
        BP Key: 8201   Status: AVAILABLE  Compressed: YES  Tag: WEEKLY_FULL
        Piece Name: /oracle/backup/FULL_OTP_0rlt6ip4
  List of Datafiles in backup set 8196
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/system/otp/system01.dbf
  2    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/data02/otp/users02.dbf
  3    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/system/otp/sysaux01.dbf
  4    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/data02/otp/users01.dbf
  5    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/index/otp/tools01.dbf
  6    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/data02/otp/users03.dbf
  7    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/system/otp/undotbs01.dbf
  8    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/system/otp/cluser_tbs01.dbf

Restore database files from backup to another location.

RMAN> run
2> {
3> allocate channel c1 device type disk;
4> SET NEWNAME FOR DATAFILE 1 TO '/oracle/u02/oradata/system/otp/system01.dbf';
5> SET NEWNAME FOR DATAFILE 2 TO '/oracle/u02/oradata/data02/otp/users02.dbf';
6> SET NEWNAME FOR DATAFILE 3 TO '/oracle/u02/oradata/system/otp/sysaux01.dbf';
7> SET NEWNAME FOR DATAFILE 4 TO '/oracle/u02/oradata/data02/otp/users01.dbf';
8> SET NEWNAME FOR DATAFILE 5 TO '/oracle/u02/oradata/index/otp/tools01.dbf';
9> SET NEWNAME FOR DATAFILE 6 TO '/oracle/u02/oradata/data02/otp/users03.dbf';
10> SET NEWNAME FOR DATAFILE 7 TO '/oracle/u02/oradata/system/otp/undotbs01.dbf';
11> SET NEWNAME FOR DATAFILE 8 TO '/oracle/u02/oradata/system/otp/cluser_tbs01.dbf';
12>
13> RESTORE DATABASE;
14> SWITCH DATAFILE ALL;
15> RECOVER DATABASE;
16> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=321 devtype=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 18.11.2010 08:46:21

released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/18/2010 08:46:22
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore

Hm...
For me this was an unexpected error, because I had a database backup available.

RMAN> list backup of datafile 4;


List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
8196    Incr 0  11.18G     DISK        00:53:38     17.11.2010 01:53:58
        BP Key: 8201   Status: AVAILABLE  Compressed: YES  Tag: WEEKLY_FULL
        Piece Name: /oracle/backup/FULL_OTP_0rlt6ip4
  List of Datafiles in backup set 8196
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  4    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/data02/otp/users01.dbf

RMAN> list backup of datafile 3;


List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
8196    Incr 0  11.18G     DISK        00:53:38     17.11.2010 01:53:58
        BP Key: 8201   Status: AVAILABLE  Compressed: YES  Tag: WEEKLY_FULL
        Piece Name: /oracle/backup/FULL_OTP_0rlt6ip4
  List of Datafiles in backup set 8196
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  3    0  Incr 48852276752 17.11.2010 01:00:20 /u02/oradata/system/otp/sysaux01.dbf

…
…
…

After two hours of searching for answers and performing more tests, I came across Metalink document 965122.1.

The problem was the autobackups in the FRA shown earlier: those files belonged to a different incarnation than the current incarnation of the available backups.

[From the Metalink document]
If we start a RESTORE DATABASE with a backup controlfile and a Flash Recovery Area is defined, RMAN executes an implicit crosscheck and catalog of all the objects in the Flash Recovery Area.

RMAN will catalog any objects in the Flash Recovery Area that are not registered in the controlfile, and if any of these files belongs to an incarnation different from the CURRENT incarnation in the controlfile, it changes the controlfile's CURRENT incarnation to the one found in the file being cataloged.

This prevents the database from restoring backups that belong to the old CURRENT incarnation.
RMAN considers a backup available for restore only if the backup's incarnation and the CURRENT incarnation in the controlfile are the same.
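A quick way to spot this situation is to compare incarnations directly in RMAN. These are standard RMAN commands, shown here only as a sketch (the incarnation key is a placeholder taken from your own LIST output):

```
RMAN> LIST INCARNATION OF DATABASE;

-- Compare the incarnation marked CURRENT with the incarnation the
-- cataloged backups belong to; if they differ, the controlfile can be
-- pointed back before restoring:

RMAN> RESET DATABASE TO INCARNATION <key>;
```

In my case I chose a different route and kept the implicit FRA cataloging from happening at all, as described next.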

So I decided to temporarily disable the FRA during the recovery process by commenting out the db_recovery_file_dest and db_recovery_file_dest_size parameters in the parameter file.
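
For reference, the incarnation situation can also be inspected from RMAN before resorting to disabling the FRA: LIST INCARNATION shows which incarnation the mounted controlfile considers CURRENT, and RESET DATABASE can switch it back explicitly. A hedged sketch (here `<n>` is the incarnation key you want to return to; whether this avoids the implicit-catalog problem depends on the situation):

```sql
RMAN> list incarnation of database;
RMAN> reset database to incarnation <n>;
```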

RMAN> shutdown abort;

Oracle instance shut down

$ vi $ORACLE_HOME/dbs/initotp.ora
...
#*.db_recovery_file_dest='/oracle/oradata/archive/flash_recovery_area'
#*.db_recovery_file_dest_size=2147483648
…
...

$ sqlplus "/as sysdba"

SQL> startup nomount pfile='$ORACLE_HOME/dbs/initotp.ora';
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  2087672 bytes
Variable Size             750781704 bytes
Database Buffers           67108864 bytes
Redo Buffers               18882560 bytes

Restore and recover the database.

RMAN> run
2> {
3> allocate channel c1 device type disk;
4> restore controlfile from '/oracle/backup/controlf_OTP_0ult6mop_8222_20101117';
5> }
…
…
…
RMAN> alter database mount;
database mounted

RMAN> catalog start with '/oracle/backup';
…
…

RMAN> run
2> {
3> allocate channel c1 device type disk;
4> SET NEWNAME FOR DATAFILE 1 TO '/oracle/u02/oradata/system/otp/system01.dbf';
5> SET NEWNAME FOR DATAFILE 2 TO '/oracle/u02/oradata/data02/otp/users02.dbf';
6> SET NEWNAME FOR DATAFILE 3 TO '/oracle/u02/oradata/system/otp/sysaux01.dbf';
7> SET NEWNAME FOR DATAFILE 4 TO '/oracle/u02/oradata/data02/otp/users01.dbf';
8> SET NEWNAME FOR DATAFILE 5 TO '/oracle/u02/oradata/index/otp/tools01.dbf';
9> SET NEWNAME FOR DATAFILE 6 TO '/oracle/u02/oradata/data02/otp/users03.dbf';
10> SET NEWNAME FOR DATAFILE 7 TO '/oracle/u02/oradata/system/otp/undotbs01.dbf';
11> SET NEWNAME FOR DATAFILE 8 TO '/oracle/u02/oradata/system/otp/cluser_tbs01.dbf';
12>
13> RESTORE DATABASE;
14> SWITCH DATAFILE ALL;
15> RECOVER DATABASE;
16> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=321 devtype=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 18.11.2010 09:01:48

channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /oracle/u02/oradata/system/otp/system01.dbf
restoring datafile 00002 to /oracle/u02/oradata/data02/otp/users02.dbf
restoring datafile 00003 to /oracle/u02/oradata/system/otp/sysaux01.dbf
restoring datafile 00004 to /oracle/u02/oradata/data02/otp/users01.dbf
restoring datafile 00005 to /oracle/u02/oradata/index/otp/tools01.dbf
restoring datafile 00006 to /oracle/u02/oradata/data02/otp/users03.dbf
restoring datafile 00007 to /oracle/u02/oradata/system/otp/undotbs01.dbf
restoring datafile 00008 to /oracle/u02/oradata/system/otp/cluser_tbs01.dbf
channel c1: reading from backup piece /var/umoracle/otp/fullbkp_dir/FULL_OTP_0rlt6ip4
channel c1: restored backup piece 1
failover to piece handle=/mnt/l01dbdev-s01storage1/oracle/backup/FULL_OTP_0rlt6ip4 tag=WEEKLY_FULL
channel c1: restore complete, elapsed time: 01:28:51
Finished restore at 18.11.2010 10:30:40

datafile 1 switched to datafile copy
input datafile copy recid=9 stamp=735388241 filename=/oracle/u02/oradata/system/otp/system01.dbf
datafile 2 switched to datafile copy
input datafile copy recid=10 stamp=735388241 filename=/oracle/u02/oradata/data02/otp/users02.dbf
datafile 3 switched to datafile copy
input datafile copy recid=11 stamp=735388241 filename=/oracle/u02/oradata/system/otp/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy recid=12 stamp=735388241 filename=/oracle/u02/oradata/data02/otp/users01.dbf
datafile 5 switched to datafile copy
input datafile copy recid=13 stamp=735388241 filename=/oracle/u02/oradata/index/otp/tools01.dbf
datafile 6 switched to datafile copy
input datafile copy recid=14 stamp=735388242 filename=/oracle/u02/oradata/data02/otp/users03.dbf
datafile 7 switched to datafile copy
input datafile copy recid=15 stamp=735388242 filename=/oracle/u02/oradata/system/otp/undotbs01.dbf
datafile 8 switched to datafile copy
input datafile copy recid=16 stamp=735388242 filename=/oracle/u02/oradata/system/otp/cluser_tbs01.dbf

Starting recover at 18.11.2010 10:30:43

starting media recovery

channel c1: starting archive log restore to default destination
channel c1: restoring archive log
archive log thread=1 sequence=33610
channel c1: reading from backup piece /var/otp/fullbkp_dir/ARCH_OTP_20101117_0tlt6lum_s8221_p1
channel c1: restored backup piece 1
failover to piece handle=/oracle/backup/ARCH_OTP_20101117_0tlt6lum_s8221_p1 tag=ARCHIVELOGS
channel c1: restore complete, elapsed time: 00:11:44
archive log filename=/oracle/oradata/archive/arch_1_33610_586538926.arc thread=1 sequence=33610
unable to find archive log
archive log thread=1 sequence=33611
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/18/2010 10:50:16
RMAN-06054: media recovery requesting unknown log: thread 1 seq 33611 lowscn 48852472202
RMAN> exit


$ sqlplus "/as sysdba"

SQL> recover database using backup controlfile until cancel;
ORA-00279: change 48852472202 generated at 11/17/2010 01:54:16 needed for
thread 1
ORA-00289: suggestion :
/oracle/oradata/archive/arch_1_33611_586538926.arc
ORA-00280: change 48852472202 for thread 1 is in sequence #33611

Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.

SQL> alter database open resetlogs;
Database altered.

I’m glad this problem occurred in a test environment, because two hours spent searching for answers could be critical during recovery of a production database.

You never know 🙂

Posted in ORACLE

Daily Activities of Oracle Database Administrator

Daily Work of DBA:

Check the database availability
Check the listener availability
Monitor space availability
View the alert log file to analyze any ORA- errors
Check that no tablespace is used more than 95%
View the temporary files through DBA_TEMP_FILES
Resolve daily tickets using the Remedy ticketing tool
Take backups using RMAN on a daily and weekly basis
Check that all of last night’s backups were successful
Health check of the database
Rebuild indexes if a bulk load of data has been inserted
Back up the archive log files
Monitor backups
Take COLD/RMAN backups at night
Validate database backups and archive backups
Find objects that are about to reach MAX EXTENTS
Monitor CPU, processor, and memory usage
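
The tablespace-usage check above is easy to script. A minimal sketch using DBA_DATA_FILES and DBA_FREE_SPACE (an illustration only; it ignores autoextend headroom, and the 95 threshold matches the checklist above):

```sql
-- Tablespaces more than 95% used
SELECT df.tablespace_name,
       ROUND((df.bytes - NVL(fs.bytes, 0)) / df.bytes * 100, 2) AS pct_used
FROM   (SELECT tablespace_name, SUM(bytes) bytes
        FROM   dba_data_files GROUP BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) bytes
        FROM   dba_free_space GROUP BY tablespace_name) fs
WHERE  df.tablespace_name = fs.tablespace_name(+)
AND    (df.bytes - NVL(fs.bytes, 0)) / df.bytes * 100 > 95;
```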

Weekly Work of DBA:

1: Growth of the database.
2: Full backup of the database through hot backup.
3: Taking logical backups, in case of physical backup failure.
4: Taking weekly tablespace backups.
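
The logical backup in item 3 is typically a Data Pump export. A hedged example (the credentials, directory object, and file names below are placeholders, not values from this post):

```sql
expdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp logfile=full_exp.log
```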

Monthly Work of DBA:

1: Index Rebuild.
2: Tablespace Reorganization.

Quarterly Work of DBA:

1: Patching
2: Database Reorganization

Posted in ORACLE

Deleting the Archive log Using RMAN

Once connected, you can delete archived logs with one of the following commands, depending on your situation.

1. To delete all archivelogs without backing them up:
delete archivelog all;

2. To delete all archivelogs that have already been backed up at least once to disk:
delete archivelog all backed up 1 times to device type disk;

3. To delete archivelogs up to a point in time that have already been backed up at least once to disk (logs without a backup are skipped):
delete archivelog until time '2007-04-30:09:30:50' backed up 1 times to device type disk;

4. To delete archivelogs up to a point in time (without backing them up):
delete archivelog until time "to_date('OCT 31 2008 23:00:00','Mon DD YYYY HH24:MI:SS')";

5. To delete archivelogs two days old or older that have already been backed up at least once to disk:
delete archivelog until time 'sysdate-2' backed up 1 times to device type disk;

6. To delete archivelogs older than 2 days immediately, even if RMAN cannot delete them normally:
delete force archivelog until time 'sysdate-2';

7. To back up all archivelogs that have not yet been backed up at least once, deleting each one after it is backed up:
backup archivelog all not backed up 1 times delete input;

Note 1: You can use NOPROMPT if you don’t want RMAN to prompt you before deleting the archivelogs.

Note 2: You can use FORCE if archivelogs are missing from the OS and you want to clean out the RMAN repository.

Posted in ORACLE, RMAN

RMAN Delete old archivelogs

delete force noprompt archivelog all completed before 'sysdate-7';
If you want to delete archive logs older than a few hours, first verify the cutoff time, i.e. how many hours back you want the deletion to reach:
SQL> alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS';
Session altered.
SQL> select sysdate-1/11 from dual;

SYSDATE-1/11
-------------------
12/08/2013 08:33:05
SQL> select sysdate from dual;
SYSDATE
-------------------
12/08/2013 10:44:15
SQL>
So sysdate-1/11 (about 2 hours 11 minutes before the current time) will delete the files older than roughly 8:33 AM.
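
The fraction arithmetic is easier to reason about in hours: sysdate-1/11 subtracts 1/11 of a day, and for exact hour offsets sysdate-N/24 is clearer (standard Oracle date arithmetic). A quick sanity check of the fraction:

```shell
# 1/11 of a day, in seconds and as hours/minutes (integer arithmetic)
secs=$((86400 / 11))                 # 7854 seconds
printf '%d seconds = %dh %dm\n' "$secs" $((secs / 3600)) $((secs % 3600 / 60))
# prints: 7854 seconds = 2h 10m
```

That matches the transcript above: 10:44:15 minus 08:33:05 is about 2 hours 11 minutes.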
RMAN> connect target /
delete noprompt archivelog until time '(sysdate-1/11)' backed up 1 times to device type sbt_tape;
or
RMAN> delete force noprompt obsolete;
RMAN> delete force noprompt expired;
Posted in ORACLE, RMAN

Oracle Rman Delete archivelog file older than N-day

rman target /
run {
allocate channel D1 type disk;
delete noprompt archivelog until time 'SYSDATE-<N>';
release channel D1;
}

Example: N = 2 (delete archivelogs older than 2 days)
run {
allocate channel D1 type disk;
delete noprompt archivelog until time 'SYSDATE-2';
release channel D1;
}
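
Runs like this are often scheduled from cron. A hedged sketch of a wrapper script (assumes ORACLE_SID and ORACLE_HOME are set in the environment and rman is on the PATH; the N=2 cutoff and file paths are examples):

```shell
#!/bin/sh
# Write an RMAN command file that deletes archivelogs older than N days,
# then run it if the rman binary is available.
N=2
cat > /tmp/del_arch.rman <<EOF
run {
  allocate channel D1 type disk;
  delete noprompt archivelog until time 'SYSDATE-$N';
  release channel D1;
}
EOF
if command -v rman >/dev/null 2>&1; then
  rman target / cmdfile=/tmp/del_arch.rman log=/tmp/del_arch.log
fi
```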

Posted in ORACLE, RMAN

Containers Instead of Virtual Machines

The last couple of years have seen the rise of software products that leverage Linux Containers (LXC) to deploy many instances of an application on one operating system. Running applications in LXC is an alternative to running applications in a virtual machine. However, LXC is not a new technology; it was first released in 2008 and is viewed as relatively mature. In fact, containerization as a concept existed earlier in Solaris Zones (2005) and AIX Workload partitioning (2007).

Containers are easy to deploy and consume very few resources compared to a virtual machine. Where virtual machines abstract operating systems from the underlying hardware, containers abstract applications from the operating system. Containers make it possible to run many copies of the same or similar applications on top of one operating system, thus using a single copy of all the operating system files and shared libraries.


From an efficiency standpoint, this is obviously a huge win; it also has the benefit of ensuring consistency across the applications since they’re sharing the dependencies rather than each application having a potentially disparate version of the dependency. LXC developers are now expanding on the platform with LXD, a new way of interacting with containers which exposes a REST API. This will allow much greater orchestration of containers moving forward. As containers become more popular, the distribution of data center software will likely focus more on containers than on virtual machines.
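
As an illustration of how lightweight that lifecycle is, the LXC userspace tools cover create/start/attach/stop with a few commands. A hedged sketch (requires the LXC tools installed and root privileges; the container name and image parameters below are examples, not from this post):

```shell
# create a container from a downloaded image, start it, run a command inside, stop it
lxc-create -n web01 -t download -- -d ubuntu -r focal -a amd64
lxc-start  -n web01
lxc-attach -n web01 -- uname -a
lxc-stop   -n web01
```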

Posted in OS, Uncategorized

Move or Rename the Tempfile in Oracle

1. Login as SYSDBA
2. Check the Tempfile status

SQL> SELECT v.file#, t.file_name, v.status
 2  FROM dba_temp_files t, v$tempfile v
 3  WHERE t.file_id = v.file#;

 FILE#     FILE_NAME                 STATUS
---------- ------------------------- -------
 1         /u01/temp01.dbf           ONLINE
 2         /u02/temp02.dbf           ONLINE
 3         /u02/temp03.dbf           ONLINE

3. Take the Tempfile that needs to be moved OFFLINE

SQL> ALTER DATABASE TEMPFILE '/u01/temp01.dbf' OFFLINE;
Database altered.

ATTENTION:
The tempfile must be taken offline before the move; otherwise you will get this error:

SQL> ALTER DATABASE RENAME FILE '/u01/temp01.dbf' TO '/u02/temp01.dbf';
ALTER DATABASE RENAME FILE '/u01/temp01.dbf' TO '/u02/temp01.dbf'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01121: cannot rename database file 1026 - file is in use or recovery
ORA-01110: data file 1026: '/u01/temp02.dbf'

4. Check the Tempfile status

SQL> SELECT v.file#, t.file_name, v.status
 2  FROM dba_temp_files t, v$tempfile v
 3  WHERE t.file_id = v.file#;

 FILE#     FILE_NAME                 STATUS
---------- ------------------------- -------
 1         /u01/temp01.dbf           OFFLINE
 2         /u02/temp02.dbf           ONLINE
 3         /u02/temp03.dbf           ONLINE

5. Copy the Tempfile to its new location

SQL> !cp -p /u01/temp01.dbf /u02/temp01.dbf

ATTENTION:
Don’t forget to copy the file before renaming it; otherwise you will get this error:

SQL> ALTER DATABASE RENAME FILE '/u01/temp01.dbf' TO '/u02/temp01.dbf';
ALTER DATABASE RENAME FILE '/u01/temp01.dbf' TO '/u02/temp01.dbf'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 1025 - new file '/u02/temp01.dbf' not found
ORA-01110: data file 1025: '/u01/temp01.dbf'
ORA-27037: unable to obtain file status
HP-UX Error: 2: No such file or directory
Additional information: 3

6. Rename the Tempfile to its new location

SQL> ALTER DATABASE RENAME FILE '/u01/temp01.dbf' TO '/u02/temp01.dbf';
Database altered.

7. Check the Tempfile status after the move

SQL> SELECT v.file#, t.file_name, v.status
 2  FROM dba_temp_files t, v$tempfile v
 3  WHERE t.file_id = v.file#;

 FILE#     FILE_NAME                 STATUS
---------- ------------------------- -------
 1         /u02/temp01.dbf           OFFLINE
 2         /u02/temp02.dbf           ONLINE
 3         /u02/temp03.dbf           ONLINE

8. Bring the moved Tempfile ONLINE

SQL> ALTER DATABASE TEMPFILE '/u02/temp01.dbf' ONLINE;
Database altered.

9. Check the Tempfile status after bringing it online

SQL> SELECT v.file#, t.file_name, v.status
 2  FROM dba_temp_files t, v$tempfile v
 3  WHERE t.file_id = v.file#;

 FILE#     FILE_NAME                 STATUS
---------- ------------------------- -------
 1         /u02/temp01.dbf           ONLINE
 2         /u02/temp02.dbf           ONLINE
 3         /u02/temp03.dbf           ONLINE

10. Remove the old Tempfile

SQL> !rm -rf /u01/temp01.dbf
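
An alternative worth noting: it is often simpler to add a new tempfile at the target location and drop the old one, avoiding the offline/copy/rename steps entirely. A hedged sketch (assumes the tempfiles belong to a tablespace named TEMP and picks an example SIZE; paths are from the example above):

```sql
-- Add a replacement tempfile in the new location, then drop the old one.
ALTER TABLESPACE temp ADD TEMPFILE '/u02/temp01.dbf' SIZE 500M;
ALTER DATABASE TEMPFILE '/u01/temp01.dbf' DROP INCLUDING DATAFILES;
```
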
Posted in ORACLE