Tuesday, August 27, 2013

Find index skewed, rebuild


Summary
It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt. When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance.

The key column for deciding whether an index is skewed is BLEVEL (the B-tree level). To get a current value, gather statistics on the index or run ANALYZE INDEX ... VALIDATE STRUCTURE.

If BLEVEL is 4 or higher, it is recommended to rebuild the index.
SELECT OWNER, INDEX_NAME, TABLE_NAME, LAST_ANALYZED, BLEVEL
FROM DBA_INDEXES 
WHERE OWNER NOT IN ('SYS', 'SYSTEM') 
AND BLEVEL >= 4
ORDER BY BLEVEL DESC;
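
Rebuilding is then a one-liner per index. A minimal sketch (the owner and index name are placeholders; the ONLINE option needs Enterprise Edition):

ALTER INDEX scott.emp_name_ix REBUILD ONLINE;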

Disk I/O

Script

Datafiles Disk I/O
Tablespace Disk I/O
Which segments have top Logical I/O & Physical I/O
Which SQL are doing a lot of disk I/O

Which SQL are doing a lot of disk I/O
SELECT * FROM 
   (SELECT  SUBSTR(sql_text,1,500) SQL,  
            ELAPSED_TIME, CPU_TIME, 
            disk_reads, executions,  
            disk_reads/executions "Reads/Exec", 
            hash_value,address   
     FROM V$SQLAREA   
     WHERE  ( hash_value, address ) IN (  
         SELECT DISTINCT HASH_VALUE, address 
           FROM v$sql_plan  
           WHERE DISTRIBUTION IS NOT NULL ) 
      AND disk_reads > 100
      AND executions > 0
     ORDER BY ELAPSED_TIME DESC) 
  WHERE ROWNUM <=30;
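
Once you have spotted a statement, its hash_value lets you pull the execution plan straight from v$sql_plan. A rough sketch for a cursor with a single child (bind your own hash_value):

SELECT LPAD(' ', depth) || operation || ' ' || options step, object_name
FROM v$sql_plan
WHERE hash_value = :hash_value
ORDER BY id;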

Which segments have top Logical I/O & Physical I/O


Summary
Do you know which segments in your Oracle database have the largest amount of I/O, physical and logical? This SQL finds the most heavily accessed segments so you can target your tuning efforts on them:
SELECT ROWNUM AS Rank, Seg_Lio.* FROM 
         (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'LIO' AS Unit
          FROM V$segment_Statistics St
         WHERE St.Statistic_Name = 'logical reads'
         ORDER BY St.VALUE DESC) Seg_Lio
 WHERE ROWNUM <= 10
UNION ALL
SELECT ROWNUM AS Rank, Seg_Pio_r.* FROM 
         (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'PIO Reads' AS Unit
          FROM V$segment_Statistics St
         WHERE St.Statistic_Name = 'physical reads'
         ORDER BY St.VALUE DESC) Seg_Pio_r
 WHERE ROWNUM <= 10
UNION ALL
SELECT ROWNUM AS Rank, Seg_Pio_w.* FROM 
         (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'PIO Writes' AS Unit
          FROM V$segment_Statistics St
         WHERE St.Statistic_Name = 'physical writes'
         ORDER BY St.VALUE DESC) Seg_Pio_w
 WHERE ROWNUM <= 10;
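
To drill into one segment from the list, you can pull all of its counters from v$segment_statistics. A sketch (the owner and object name are placeholders):

SELECT statistic_name, value
FROM v$segment_statistics
WHERE owner = 'SCOTT'
AND object_name = 'EMP'
ORDER BY value DESC;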

Tablespace Disk I/O


Summary
The physical design of the database underpins disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention.

How I/O is spread per Tablespace
SELECT T.NAME, SUM(Physical_READS) Physical_READS, 
ROUND((RATIO_TO_REPORT(SUM(Physical_READS)) OVER ())*100, 2) || '%' PERC_READS, 
SUM(Physical_WRITES) Physical_WRITES, 
ROUND((RATIO_TO_REPORT(SUM(Physical_WRITES)) OVER ())*100, 2) || '%' PERC_WRITES, 
SUM(total) total, ROUND((RATIO_TO_REPORT(SUM(total)) OVER ())*100, 2) || '%' PERC_TOTAL 
FROM (SELECT ts#, NAME, phyrds Physical_READS, phywrts Physical_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC ) A, sys.ts$ T
WHERE A.ts# = T.ts#
GROUP BY T.NAME 
ORDER BY Physical_READS DESC;

Datafiles Disk I/O


Summary
The physical design of the database underpins disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention.

How I/O is spread per datafile
SELECT NAME, phyrds Physical_READS, ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS, 
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC;
Tip: ORDER BY phyrds DESC sorts the report by physical reads; use ORDER BY phywrts DESC instead to sort by physical writes.

How I/O is spread per filesystem
SELECT filesystem, ROUND((RATIO_TO_REPORT(READS) OVER ())*100, 2) || '%' PERC_READS, 
ROUND((RATIO_TO_REPORT(WRITES) OVER ())*100, 2) || '%' PERC_WRITES, 
ROUND((RATIO_TO_REPORT(TOTAL) OVER ())*100, 2) || '%' PERC_TOTAL 
FROM (SELECT filesystem, SUM(Physical_READS) READS, SUM(Physical_WRITES) WRITES, SUM(total) TOTAL 
FROM (SELECT SUBSTR(NAME, 0, 25) filesystem, phyrds Physical_READS, 
ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS, 
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC) A 
GROUP BY filesystem) B 
ORDER BY ROUND((RATIO_TO_REPORT(total) OVER ())*100, 2) DESC;
Tip: To group datafiles by filesystem correctly, experiment with the length in SUBSTR(NAME, 0, 25) so that it covers only the mount-point part of the path.

How I/O is spread across the datafiles of a specific tablespace (replace 'TABLESPACE_NAME' with your tablespace name)
SELECT df.NAME, phyrds Physical_READS, ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS, 
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs, ts$ t
WHERE df.FILE# = fs.FILE# 
AND df.ts# = t.ts# 
AND t.NAME = 'TABLESPACE_NAME'
ORDER BY phyrds DESC;

Current waiting events Summary

The first and most important script for the Oracle Wait Interface (OWI) shows where current sessions are waiting:
SELECT 
a.SID, b.serial#, b.status, p.spid, b.logon_time, a.event, l.NAME latch_name, a.SECONDS_IN_WAIT SEC, 
b.sql_hash_value, b.osuser, b.username, b.module, b.action, b.program, 
a.p1,a.p1raw,  a.p2, a.p3,   --, b.row_wait_obj#, b.row_wait_file#, b.row_wait_block#, b.row_wait_row#, 
'alter system kill session ' || '''' || a.SID || ', '|| b.serial# || '''' || ' immediate;' kill_session_sql 
FROM v$session_wait a, v$session b, v$latchname l, v$process p 
WHERE a.SID = b.SID 
AND b.username IS NOT NULL 
AND b.TYPE <> 'BACKGROUND' 
AND a.event NOT IN (SELECT NAME FROM v$event_name WHERE wait_class = 'Idle') 
AND (l.latch#(+) = a.p2) 
AND b.paddr = p.addr 
--AND a.sid = 559 
--AND module IN ('JDBC Thin Client') 
--AND p.spid = 13317
--AND b.sql_hash_value = '4119097924'
--AND event like 'library cache pin%' 
--AND b.osuser = 'oracle' 
--AND b.username = 'APPS' 
ORDER BY a.SECONDS_IN_WAIT DESC;
Some useful explanations

sid, serial#, status, logon_time, osuser, username, module, action, program: session information from v$session
kill_session_sql: a ready-made ALTER SYSTEM KILL SESSION statement for the session
spid: the UNIX process ID, in case you want to kill -9 it from the shell
sql_hash_value: identifies the SQL statement currently being executed; you'll need it for explain plans, etc.
event, latch_name, sec: the wait event and how long the session has been waiting
p1, p1raw, p2, p3: event arguments used to find the object related to the wait
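
To see what the P1/P2/P3 arguments mean for any particular event, you can ask v$event_name directly:

SELECT name, parameter1, parameter2, parameter3
FROM v$event_name
WHERE name = 'db file sequential read';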

Db file sequential read


Summary
The db file sequential read wait event means that Oracle is waiting while doing a single-block I/O read. This is typically the case when reading an index.

As with all wait events, the columns P1, P2 and P3 give us the information needed to diagnose the wait.

Tip: A db file sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3, which is the number of blocks read). This wait may also be seen for reads from datafile headers (P2=1 indicates a file header read).

Parameters:
    P1 = file#
    P2 = block#
    P3 = blocks

file#: The file number of the file that Oracle is trying to read from. In Oracle8/9 it is the ABSOLUTE file number.

block#: The starting block number in the file from which Oracle starts reading. Typically only one block is being read.

To find the object Oracle is doing the I/O against, use one of the two following queries:

SELECT owner, segment_type, segment_name, partition_name, tablespace_name
FROM dba_extents
WHERE :P2 BETWEEN block_id AND (block_id + blocks - 1)
AND file_id = :P1;

Or even better:

SELECT a.SID, c.obj, c.FILE#, c.dbablk
FROM v$session_wait a, x$bh c
WHERE a.p1 = c.FILE#(+)
AND a.p2 = c.dbablk(+)
AND a.event = 'db file sequential read'
AND a.SID = :sid_waiting;

blocks: The number of blocks that Oracle is trying to read from file# starting at block#. This is usually 1, but if P3 > 1 then this is a multiblock read. Multiblock "db file sequential read"s may be seen in earlier Oracle versions when reading from SORT (TEMPORARY) segments.

Wait Time:
The I/O is generally issued as a single I/O request to the OS; the wait blocks until the I/O request completes. Note that an Oracle read request to the OS may be satisfied from an OS file system cache, so the wait time may be very small.
Some advice
If you see this wait event then, in general, you are in a good position. All databases (especially very big systems) have some wait events, and doing I/O for index scans is normal.

But SEC (seconds in wait) from this query should be 0 or close to zero (1-4). If you see the wait time increasing, then you must tune the index I/O.

How to improve performance for db file sequential read
The steps, starting with the ones most likely to give results, are:

1. Rebuild the index (fast index rebuild)
*** To rule this case out, first examine the state of the index:
if it is "compact", with no empty holes, then there is no need to rebuild it.
 Find index skewed, rebuild
 Find indexes browning, rebuild

2. Tune the SQL to use a better index
3. Distribute the index across different filesystems to reduce I/O contention (Disk I/O)
4. Increase the db_cache_size (see the sketch below)
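
For step 4, a minimal sketch; the 2G figure is only an assumption, so check v$db_cache_advice before resizing:

ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH;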

Embarcadero has a really nice explanation of this; try the site below:

https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-io-waits

Friday, August 23, 2013

DELETE or CLEANUP failed export Jobs


Failed jobs can be identified with the following SQL:

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;

Check the STATE column of all the jobs: failed jobs will show as NOT RUNNING.


DROP MASTER TABLE


Since the above jobs are orphaned (not running) and will not be restarted, drop their master tables.
The master tables here are SYS_EXPORT_FULL_01 and SYS_EXPORT_FULL_02.
SQL*Plus: Release 10.2.0.4.0 - Production on Fri Aug 23 10:14:16 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table SYS_EXPORT_FULL_01;

Table dropped.

SQL> drop table SYS_EXPORT_FULL_02;

Table dropped.


Dropping these tables has no impact on new jobs or on any other database objects.
Now let's check whether the jobs still show up with the failed status.
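
Re-run the same query as before; the dropped jobs should no longer be listed:

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;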

Wednesday, August 21, 2013

Extract all tablespaces DDL

A nice and easy way to extract the DDL for all tablespaces.

Generate the DDL using the query below; you can then re-create the tablespaces in any environment, with any changes you want to make.

set heading off;
set echo off;
Set pages 999;
set long 90000;
spool ddl_list.sql

select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;

spool off

(sample) output will look something like this:

"
  CREATE TABLESPACE "SYSTEM" DATAFILE
  '/u02/oracle/oradata/system01.dbf' SIZE 314572800
  AUTOEXTEND ON NEXT 10485760 MAXSIZE 32767M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT MANUAL
   ALTER DATABASE DATAFILE
  '/u02/oracle/oradata/QPUAT/system01.dbf' RESIZE 4194304000
 "
"
  CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  '/u02/oracle/oradata/undotbs01.dbf' SIZE 209715200
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
   ALTER DATABASE DATAFILE
  '/u02/oracle/oradata/QPUAT/undotbs01.dbf' RESIZE 1782579200
 "

Upgrading OPatch from 10.2.0.4.2 to 10.2.0.5.1

Upgrading OPatch to the latest version:

It is very simple to upgrade the OPatch version for your database.

First, check the current version of OPatch:
[oracle@linux5 OPatch]$ opatch version
Invoking OPatch 10.2.0.4.2

OPatch Version: 10.2.0.4.2

OPatch succeeded.

Now download the latest OPatch version from the Oracle Support site.
In this case I downloaded p6880880_101000 and moved it to the database server.
Move the patch file to your Oracle home:

[oracle@linux5 OPatch] mv p6880880_101000_SOLARIS64.zip $ORACLE_HOME
[oracle@linux5 OPatch] cd $ORACLE_HOME
(If a former OPatch directory already exists, rename it.)

Make sure you back up your current OPatch home:

[oracle@linux5 OPatch] mv OPatch OPatch_100057_OLD

Now unzip the patch file
[oracle@linux5 OPatch] unzip p6880880_101000_SOLARIS64.zip
Archive:  p6880880_101000_SOLARIS64.zip
   creating: OPatch/
   creating: OPatch/docs/
  inflating: OPatch/docs/FAQ      
  inflating: OPatch/docs/README.txt
  inflating: OPatch/docs/Users_Guide.txt
  inflating: OPatch/docs/bt1.txt  
  inflating: OPatch/docs/bt2.txt  
  inflating: OPatch/docs/tmp      
  inflating: OPatch/emdpatch.pl   
   creating: OPatch/jlib/
  inflating: OPatch/jlib/opatch.jar
  inflating: OPatch/opatch        
  inflating: OPatch/opatch.bat    
  inflating: OPatch/opatch.pl     
   creating: OPatch/perl_modules/
  inflating: OPatch/perl_modules/Apply.pm
  inflating: OPatch/perl_modules/AttachHome.pm
  inflating: OPatch/perl_modules/Command.pm
  inflating: OPatch/perl_modules/LsInventory.pm
  inflating: OPatch/perl_modules/Query.pm
  inflating: OPatch/perl_modules/RollBack.pm
  inflating: OPatch/perl_modules/Version.pm
  inflating: OPatch/perl_modules/XML.pm
  inflating: OPatch/perl_modules/opatchIO.pm
  inflating: OPatch/README.txt    
   creating: OPatch/ocm/
 extracting: OPatch/ocm/ocm.zip   
   creating: OPatch/ocm/doc/
  inflating: OPatch/ocm/doc/license.txt
   creating: OPatch/ocm/lib/
  inflating: OPatch/ocm/lib/emocmutl.jar
   creating: OPatch/ocm/bin/
  inflating: OPatch/ocm/bin/emocmrsp




That's it: the patching is done.

Check the OPatch version:

[oracle@linux5 bin]$ cd $ORACLE_HOME/OPatch

Export the path for OPatch:
[oracle@linux5 OPatch]$ export PATH=$PATH:$ORACLE_HOME/OPatch

[oracle@linux5 OPatch]$ opatch version
Invoking OPatch 10.2.0.5.1

OPatch Version: 10.2.0.5.1

OPatch succeeded.

Monday, August 19, 2013

Why is Swapping Bad for Oracle?


The Linux OS is a virtual memory system, like any other modern operating system. The virtual memory management system of Linux includes:
  • Paging
  • Swapping
  • HugePages
  • Slab allocator
  • Shared memory
When almost all of the available physical memory (RAM) is in use, the Linux kernel starts to swap out pages to swap space (disk), or worse, it may swap out entire processes. Another scenario is that it starts killing processes using the Out-of-Memory (OOM) Killer.

Swap Usage on Linux

To check swap usage on Linux you can use one of below:
  • free: Look for low (or zero) values for Swap / used:
# free -m
             total       used       free     shared    buffers     cached
Mem:          4018       3144        873          0         66       2335
-/+ buffers/cache:        742       3276
Swap:         4690          0       4690
  • meminfo: Look for SwapTotal = SwapFree
# grep Swap /proc/meminfo
SwapCached:            0 kB
SwapTotal:       4803392 kB
SwapFree:        4803380 kB
  • top: Look for low (preferably zero) values of Swap / used:
# top

...
Mem:   4115320k total,  3219408k used,   895912k free,    68260k buffers
Swap:  4803392k total,       12k used,  4803380k free,  2390804k cached
...
  • vmstat: Look for si / so values to be zero:
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0     12 871592  69308 2405188    0    0   103    36  275  500 14 13 71  1

Why is Swapping Bad?

Especially on Linux we try to avoid swapping because:
  • Swapping potentially makes every memory page access a thousand (or more) times slower (and the Linux swapping mechanism is not especially fast).
  • As more memory is swapped, operations take longer.
  • As operations take longer, more requests pile up waiting to be served.
  • The demand for resources increases exponentially.
Because of this feedback loop, once a memory-bound application (like a database) starts swapping, most of the time there is no recovering.

The Oracle Database SGA pages are pageable on Linux by default, so they can be swapped out if the system runs out of memory. Using HugePages is one method of making sure the Oracle SGA is never swapped out, but you still need to be careful with the configuration. To learn all about HugePages, please read Document 361323.1 and its references.

Conclusions

  • Make sure your total SGA and PGA fit in RAM, while leaving some decent memory for process space and system services. See the database installation guides for more information.
  • Consider using HugePages on Linux
  • Be very careful with your memory configuration (HugePages, Automatic Memory Management, Swap, VLM)
  • Monitor your OS continuously for memory usage and swapping

Monday, August 12, 2013

Script to get all table and system privileges granted to a user

Description

This script lists all table and system privileges granted to a user. It also takes into account privileges assigned via roles, including roles granted through other roles.

Parameters

username - The username to display privileges for

SQL Source


REM Copyright (C) Think Forward.com 1998- 2005. All rights reserved. 

set echo off
set verify off
set pages 200
col granted_role form a20
col owner form a10
col table_name form a27
col privilege form a12

ACCEPT username  prompt 'Enter Username : '

spool privs.lst

PROMPT Roles granted to user

SELECT granted_role,admin_option,default_role 
FROM dba_role_privs
WHERE grantee=UPPER('&username');
PROMPT Table Privileges granted to a user through roles
SELECT granted_role, owner, table_name, privilege 
FROM ( SELECT granted_role 
  FROM dba_role_privs WHERE grantee=UPPER('&username')
       UNION
       SELECT granted_role 
  FROM role_role_privs
  WHERE role in (SELECT granted_role 
       FROM dba_role_privs WHERE grantee=UPPER('&username')
      )
 ) roles, dba_tab_privs
WHERE granted_role=grantee;

PROMPT System Privileges assigned to a user through roles

SELECT granted_role, privilege
FROM ( SELECT granted_role 
  FROM dba_role_privs WHERE grantee=UPPER('&username')
       UNION
       SELECT granted_role 
  FROM role_role_privs
  WHERE role in (SELECT granted_role 
       FROM dba_role_privs WHERE grantee=UPPER('&username')
      )
 ) roles, dba_sys_privs
WHERE granted_role=grantee;

PROMPT Table privileges assigned directly to a user

SELECT owner, table_name, privilege 
FROM dba_tab_privs
WHERE grantee=UPPER('&username');

PROMPT System privileges assigned directly to a user

SELECT privilege, admin_option
FROM  dba_sys_privs
WHERE grantee=UPPER('&username');
spool off
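
Assuming the script is saved as user_privs.sql (the filename is hypothetical), run it from SQL*Plus and answer the prompt; the output is also spooled to privs.lst:

SQL> @user_privs.sql
Enter Username : scott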

Friday, August 9, 2013

Oracle Password Verification function

Password Verification : Enforcing Password Complexity

Password complexity is enforced using a verification function. Oracle provides an example password verification function in the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" file, but you can code a custom one if you prefer. The function must accept three parameters (username, password and old_password) and return a boolean value, where TRUE signifies the password is valid. The simple example below just forces the password to be at least 8 characters long.

CREATE OR REPLACE FUNCTION my_verification_function (
  username      VARCHAR2,
  password      VARCHAR2,
  old_password  VARCHAR2)
  RETURN BOOLEAN AS
BEGIN
  IF LENGTH(password) < 8 THEN
    RETURN FALSE;
  ELSE
    RETURN TRUE;
  END IF;
END my_verification_function;
/

In the above we have required that the password be at least 8 characters long.
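
A slightly richer sketch of the same idea (my own variation, not the utlpwdmg.sql implementation) that also requires at least one digit:

CREATE OR REPLACE FUNCTION my_verification_function (
  username      VARCHAR2,
  password      VARCHAR2,
  old_password  VARCHAR2)
  RETURN BOOLEAN AS
BEGIN
  -- at least 8 characters long
  IF LENGTH(password) < 8 THEN
    RETURN FALSE;
  END IF;
  -- at least one digit
  IF NOT REGEXP_LIKE(password, '[0-9]') THEN
    RETURN FALSE;
  END IF;
  RETURN TRUE;
END my_verification_function;
/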

Once the function is compiled under the SYS schema it can be referenced by the PASSWORD_VERIFY_FUNCTION parameter of a profile.

You can enable the password verify function as below:
 
ALTER PROFILE my_profile LIMIT
  PASSWORD_VERIFY_FUNCTION my_verification_function;
The code below assigns the completed profile to a user and tests it.
SQL> ALTER USER scott PROFILE my_profile;

User altered.

SQL> ALTER USER scott IDENTIFIED BY small;
ALTER USER scott IDENTIFIED BY small
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-28003: password verification for the specified password failed


SQL> ALTER USER scott IDENTIFIED BY much_bigger;

User altered.

SQL>
A more complete example of a password verification function is provided by the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" script.

If you have trouble thinking of strong passwords, try using a password generator like RandPass.com. It creates random strong passwords with a phonetic sound, making them easier to remember.
In Oracle 11g the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" script has been updated to improve the default password verification function. It also applies the function to the DEFAULT profile, enabling it for all users.


FLASHBACK STANDBY DATABASE

SQL*Plus commands to turn off Data Guard temporarily in order to activate the physical standby for testing purposes. Each command is prefixed PRMY or STBY to indicate which database it must be executed on. 11g reduces these steps with a feature called the "snapshot standby" database.

PRMY> ALTER SYSTEM ARCHIVE LOG CURRENT;

PRMY> ALTER SYSTEM SET DG_BROKER_START=FALSE;

PRMY> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

Redo apply is no longer running (the broker would automatically re-enable it if it were not turned off) and logs are not shipping. All of these things change in 11g!

STBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

STBY> CREATE RESTORE POINT HOTFIX1 GUARANTEE FLASHBACK DATABASE;

STBY> ALTER DATABASE ACTIVATE STANDBY DATABASE;

NOW you can safely use the physical standby for testing purposes: it is a fully functional database whose data matches production as of a certain point in time, and no new data will be added…remember your Recovery Time Objective.

When testing is over, revert back into a physical standby.

STBY> STARTUP MOUNT FORCE;

STBY> FLASHBACK DATABASE TO RESTORE POINT HOTFIX1;

STBY> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

STBY> STARTUP MOUNT FORCE;

STBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
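
At each stage you can sanity-check which role the database is in; v$database exposes both columns:

STBY> SELECT database_role, open_mode FROM v$database;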

Some Very Useful Oracle Views


Oracle Views


dba_cons_columns
dba_constraints
dba_data_files
dba_ddl_locks
dba_errors
dba_extents
dba_free_space
dba_indexes
dba_jobs
dba_lock
dba_objects
dba_part_tables
dba_queues
dba_resumable
dba_role_privs
dba_roles
dba_rollback_segs
dba_segments
dba_source
dba_tab_columns
dba_tab_partitions
dba_tables
dba_temp_files
dba_triggers
dba_ts_quotas
dba_undo_extents
dba_users
dba_mview_logs
dba_mview_refresh_times
dba_mviews
v$archived_log
v$asm_disk
v$backup_datafile
v$controlfile
v$database
v$datafile
v$db_cache_advice
v$filestat
v$instance
v$lock
v$locked_object
v$log
v$logfile
v$parameter
v$process
v$segment_statistics
v$session
v$session_longops
v$session_wait
v$sesstat
v$sort_usage
v$sqlarea
v$statname
v$sysstat
v$system_event
v$tempfile
v$transaction


RMAN backup status

--------------------------------- Check if RMAN is still running ------------------------------

select
  sid,
  start_time,
  totalwork,
  sofar,
  (sofar/totalwork) * 100 pct_done
from
   v$session_longops
where
   totalwork > sofar
AND
   opname NOT LIKE '%aggregate%'
AND
   opname like 'RMAN%';

select
   sid,
   spid,
   client_info,
   event,
   seconds_in_wait,
   p1, p2, p3
 from
   v$process p,
   v$session s
 where
   p.addr = s.paddr
 and
   client_info like 'rman channel=%';


Yousef Rifai has published this RMAN monitoring script, quite handy when you need to monitor the status of a long-running RMAN backup job:


---------------    Gives percentage of job completed    --------------------------

REM RMAN Progress
alter session set nls_date_format='dd/mm/yy hh24:mi:ss'
/
select SID, START_TIME,TOTALWORK, sofar, (sofar/totalwork) * 100 done,
sysdate + TIME_REMAINING/3600/24 end_at
from v$session_longops
where totalwork > sofar
AND opname NOT LIKE '%aggregate%'
AND opname like 'RMAN%'
/


REM RMAN waits
set lines 120
column sid format 9999
column spid format 99999
column client_info format a25
column event format a30
column secs format 9999
SELECT SID, SPID, CLIENT_INFO, event, seconds_in_wait secs, p1, p2, p3
  FROM V$PROCESS p, V$SESSION s
  WHERE p.ADDR = s.PADDR
  and CLIENT_INFO like 'rman channel=%'
/

Using SQL*Loader to insert data into a database table

Oracle SQL Loader – Importing CSV files to a table

This post shows how to easily insert data from a CSV file into a database table using the Oracle SQL*Loader tool. We will only concentrate on importing CSV-formatted data into an Oracle table with a 1-to-1 column mapping; otherwise I would need to write a book. You can easily reuse the code and just customise the columns to your needs.

Source files: import.ctl import.csv
Step 1

Using the command line/prompt, use SQL*Plus to log in to your Oracle database. Make sure the user has permission to create tables.

$ sqlplus user/password@DBINSTANCE;
Step 2

Under this user, create a table where you want to import the data (skip this step if the table already exists).

    SQL> create table T_IMPORTED_DATA 
    2  ( 
    3    SERIAL       NUMBER(16), 
    4    TITLE        VARCHAR2(64), 
    5    PUBLISH_DATE DATE, 
    6    DESCRIPTION  VARCHAR2(64) 
    7  ) 
    8  / 
    Table created 

Step 3

We use an Oracle utility called SQL*Loader. The utility is command-line based and should work wherever SQL*Plus works (as above). It accepts the two source files: the CSV data file, and the CTL control file which specifies how to process the data file. The only thing you need to do is customize the last few rows of the control file to specify how to process each column of the CSV. Each row has the following format:

<column_name> <data_type> <NULLIF column_name = BLANKS>

<column_name> specifies the table column to put the data
<data_type> specifies data type conversion when parsing the CSV data
<NULLIF column_name = BLANKS> is always added to handle empty CSV data, NULL will be inserted

Note that each successive row in the CTL file represents, in order, one column to be parsed from the CSV.
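
For example, a control file for the T_IMPORTED_DATA table created above might look like the sketch below; the delimiter, optional quoting and date mask are assumptions about the CSV layout, so adjust them to your file:

LOAD DATA
INFILE 'import.csv'
INTO TABLE T_IMPORTED_DATA
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  SERIAL       INTEGER EXTERNAL NULLIF SERIAL = BLANKS,
  TITLE        CHAR NULLIF TITLE = BLANKS,
  PUBLISH_DATE DATE "YYYY-MM-DD" NULLIF PUBLISH_DATE = BLANKS,
  DESCRIPTION  CHAR NULLIF DESCRIPTION = BLANKS
)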
Step 4

To begin the import, put all the files into one directory and run the sqlldr command as the user who owns the table created above.

$ sqlldr userid=user/password@DB_INSTANCE_NAME control=import.ctl log=import.log data=import.csv
Verify

Log files will be created: import.log and import.bad. Check these to see whether any rows were skipped. Otherwise, the data should now be in the database table.

Finding Last DDL and DML Activity on a Table


Here is a small set of SQL commands to get this information:

create table t (t1 number,t2 varchar2(20),t3 date);

Table created.

Here is how you can find it:

OracleDba.In >select
  2  (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
  3   decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
  4  from
  5  (select nvl(max(ora_rowscn),0) maxscn from t);

DDL Time            DML Time
------------------- -------------------------------
2012-01-25 15:58:35 N/A

Now add some data to the table

sql >insert into t values(1,'A',sysdate);

1 row created.

sql  >select
          (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
           decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
           from
          (select nvl(max(ora_rowscn),0) maxscn from t);

DDL Time            DML Time
------------------- -------------------------------
2013-01-25 15:58:35 25-JAN-13 04.05.14.000000000 PM

sql  >commit;

Commit complete.

OraDba.In >update t set t1=2;

1 row updated.


sql  >select
          (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
           decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
           from
          (select nvl(max(ora_rowscn),0) maxscn from t);


DDL Time            DML Time
------------------- -------------------------------
2013-01-25 15:58:35 25-JAN-13 04.05.20.000000000 PM

OracleDba.In >alter table t move;

Table altered.


sql  >select
          (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
           decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
           from
          (select nvl(max(ora_rowscn),0) maxscn from t);



DDL Time            DML Time
------------------- -------------------------------
2013-01-25 16:05:40 25-JAN-13 04.05.38.000000000 PM
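
One caveat worth knowing: ORA_ROWSCN is tracked at the block level by default, so the DML time derived from SCN_TO_TIMESTAMP is approximate. If you need row-level accuracy, the table must be created with ROWDEPENDENCIES:

create table t (t1 number, t2 varchar2(20), t3 date) rowdependencies;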


Note:  You can also get all this information by enabling auditing in your database.

Check these links for auditing:

http://arvindasdba.blogspot.com/2014/11/oracle-audting-expained.html

http://arvindasdba.blogspot.com/2014/03/auditing-ddl-changes-in-oracle-database.html

Configuration of HugePages for Oracle

Environment: Oracle Database 11.2.0.3 Enterprise Edition, Oracle Linux 6.4 64-bit
HugePages can give a performance boost on Linux systems with large SGA sizes. However, this is not set up by default, and the configuration can sometimes be a bit tricky.

This is how I did it on a system with a 4GB SGA size:

1) Disable Automatic Memory Management (AMM)

AMM is not compatible with HugePages. I disabled it by setting the following memory-related initialization parameters:
ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=0 SCOPE=BOTH;
ALTER SYSTEM SET sga_max_size=4016M SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=4016M SCOPE=BOTH;
+ restart the instance

2) Calculate the number of HugePages needed
The size of one HugePage can be found as follows:
$ cat /proc/meminfo|grep Hugepagesize
Hugepagesize: 2048 kB

The amount of HugePages that you need can be found with the following script:
#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done
# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
$ ./hugepages_settings.sh

Recommended setting: vm.nr_hugepages = 2013

3) Set the number of HugePages in /etc/sysctl.conf
Add the following to /etc/sysctl.conf:
vm.nr_hugepages = 2013
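
If you don't want to wait for the reboot in step 6, you can also ask the kernel to load the new value immediately; note that it may not manage to allocate the full number of pages if memory is already fragmented, so the reboot remains the safest route:

# sysctl -p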

4) Set the memory limits for user “oracle”
The memory limits for user "oracle" can be calculated by multiplying the number of HugePages by the Hugepagesize in kB (so, in my case, 2013*2048 = 4122624).
Add this number to /etc/security/limits.conf:
oracle soft memlock 4122624
oracle hard memlock 4122624

5) Set USE_LARGE_PAGES
By setting the initialization parameter USE_LARGE_PAGES, you can force the instance to only start up when enough HugePages are available.
ALTER SYSTEM SET USE_LARGE_PAGES=only SCOPE=SPFILE;

6) Reboot the system
Now reboot the server. You can check whether large pages are being used in two ways:
$ cat /proc/meminfo|grep HugePages
AnonHugePages: 237568 kB
HugePages_Total: 2013
HugePages_Free: 1532
HugePages_Rsvd: 1528
HugePages_Surp: 0
Or check the alert.log file for the database instance:

Starting ORACLE instance (normal)
****************** Large Pages Information *****************
Parameter use_large_pages = ONLY
Total Shared Global Region in Large Pages = 4018 MB (100%)
Large Pages used by this instance: 2009 (4018 MB)
Large Pages unused system wide = 4 (8192 KB) (alloc incr 16 MB)
Large Pages configured system wide = 2013 (4026 MB)
Large Page size = 2048 KB
***********************************************************

Wednesday, August 7, 2013

Export/Zip and Unzip/Import using UNIX Pipes

If you are creating and using large dump (.dmp) files with Oracle's export (exp) and import (imp) utilities, you can compress and uncompress the dump file as needed using gzip and gunzip (from GNU) or the unix compress and uncompress utilities. One problem with this is that you still have to hold the large dump file on disk at some point, which you may not have room for, and you may run up against the 2 GB file size limit on some machines and Oracle versions. Instead of doing the export and zip sequentially, creating an interim dump file (or doing unzip and import), unix can pipe the output of one program (such as exp) as input to another program (such as gzip) while both run in parallel, without interim files.
To do this, create a pipe (using the unix mknod command), run the two programs (in either order) with the first one in the background (using "&" at the end) so that both use the pipe like a file, and then remove the pipe.

Below is a full export to a zipped dump file:

mknod /u02/backup/dpump/exp_pipe p
gzip -cNf </u02/backup/dpump/exp_pipe>prod.dmp.gz &
exp system/oracle file=/u02/backup/dpump/exp_pipe full=y compress=n log=prod.dmp.log
rm -f /u02/backup/dpump/exp_pipe


Below is a full import from that zipped dump file:

    mknod /u02/backup/dpump/imp_pipe p
    gunzip -c prod.dmp.gz >/u02/backup/dpump/imp_pipe &
    imp system/oracle file=/u02/backup/dpump/imp_pipe fromuser=scott touser=scott ignore=y buffer=1024000 commit=y log=prod_imp.log
    rm -f /u02/backup/dpump/imp_pipe