Tuesday, June 25, 2013

adding a primary key to an existing table in oracle

Let's assume a table ABC already exists in the database and you want to add an extra column filled with unique values that can serve as the primary key.


SQL to create table ABC:

  CREATE TABLE "ABC"
   (    "USERNAME" VARCHAR2(30 BYTE) NOT NULL ENABLE,
    "USER_ID" NUMBER NOT NULL ENABLE,
    "CREATED" DATE NOT NULL ENABLE )
   TABLESPACE "QUIKPAY_USER" ;




Now we can add an additional column ID, which will be populated with unique values.

alter table abc add(ID NUMBER);

Create a sequence and use its values to fill the table's ID column:

CREATE SEQUENCE SEQ_ID
START WITH 1
INCREMENT BY 1
MAXVALUE 999999
MINVALUE 1
NOCYCLE;


Now populate the column with unique values using the SQL below:

UPDATE abc SET ID = SEQ_ID.NEXTVAL;


Now you can make the column unique, or add a primary key to the table, so that it won't accept any more duplicate values:

alter table abc add primary key (ID);
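
If you prefer a named constraint (easier to reference or drop later), something like this should work just as well; the constraint name PK_ABC_ID is only an example:

alter table abc add constraint PK_ABC_ID primary key (ID);

Note that adding the primary key also makes the column NOT NULL, so the UPDATE above must have populated every row first.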

Friday, June 21, 2013

Performance - AWR report






Display a list of snapshots
Produce a report
To see the snapshot interval and retention period
Change the snapshot interval
Change the retention period
Manually take a snapshot
List all baselines
Create a baseline
Remove a baseline
Enable/Disable automatic snapshots
Time model queries
Produce an Active Session History (ASH) report

Display a list of snapshots

set lines 100 pages 999
select snap_id
,      snap_level
,      to_char(begin_interval_time, 'dd/mm/yy hh24:mi:ss') begin_time
from   dba_hist_snapshot
order by 1
/


You can also list both the begin and end of each snapshot interval from the same DBA_HIST_SNAPSHOT view:

SELECT snap_id, to_char(begin_interval_time,'dd/MON/yy hh24:mi') Begin_Interval,
to_char(end_interval_time,'dd/MON/yy hh24:mi') End_Interval
FROM dba_hist_snapshot
ORDER BY 1;


Produce a report

 @?/rdbms/admin/awrrpt.sql

The script prompts for the report type (html or text), the number of days of snapshots to list, and the begin and end snapshot IDs.

To see the snapshot interval and retention period

col snap_interval format a30
col retention format a30
select snap_interval
,      retention
from   dba_hist_wr_control
/

Change the snapshot interval

Note. The interval is specified in minutes; this example changes it to 30 minutes.
exec dbms_workload_repository.modify_snapshot_settings (interval => 30)

Change the retention period

Note. The retention period is also specified in minutes; this example changes it to two weeks (14 days x 24 hours x 60 minutes).
exec dbms_workload_repository.modify_snapshot_settings (retention => 14*24*60)
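
Both settings can also be changed in a single call; the values below simply combine the two examples above:

exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 14*24*60)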

Manually take a snapshot

 exec dbms_workload_repository.create_snapshot

List all baselines

set lines 100
col baseline_name format a40
select baseline_id
,      baseline_name
,      start_snap_id
,      end_snap_id
from   dba_hist_baseline
order by 1
/

Create a baseline

 exec dbms_workload_repository.create_baseline (<start snap>, <end snap>, '<name>')
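
For example, to preserve the snapshots taken during a batch window (the snapshot IDs 1200 and 1210 and the name batch_window are hypothetical):

exec dbms_workload_repository.create_baseline (1200, 1210, 'batch_window')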

Remove a baseline

 exec dbms_workload_repository.drop_baseline('<baseline name>')

Enable/Disable automatic snapshots

Note. Automatic snapshots are taken by the MMON background process and are enabled by default. Setting the snapshot interval to 0 disables them:
exec dbms_workload_repository.modify_snapshot_settings (interval => 0)

and to re-enable, set a non-zero interval again (e.g. hourly)...
exec dbms_workload_repository.modify_snapshot_settings (interval => 60)

(The GATHER_STATS_JOB scheduler job sometimes mentioned in this context controls automatic optimizer statistics collection, not AWR snapshots.)

Time model queries

System time model
set lines 100 pages 999
select stat_name
,      value
from   v$sys_time_model
order by value desc
/

Session time model
set lines 100 pages 999
select stat_name
,      value
from   v$sess_time_model
where  sid = '&sid'
order by value desc
/

Produce an Active Session History (ASH) report

 @?/rdbms/admin/ashrpt.sql

datapump basics



CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;

Note. Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC.


Table Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax.

expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
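
For example, to append the EMP data from the dump file above into an existing EMP table (a sketch reusing the names from this post; the log file name is arbitrary):

impdp scott/tiger@db10g tables=EMP directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_append.log table_exists_action=APPEND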


Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log


Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log




CONTENT parameter (accepts ALL, the default, as well as DATA_ONLY and METADATA_ONLY; the example below exports object definitions without any row data):

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY


The following example shows an export operation that is assigned a job name of exp_job:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job
NOLOGFILE=y


The following is an example of using the PARALLEL parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4


datapump network link:

expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
  DUMPFILE=network_export.dmp LOGFILE=network_export.log


The contents of the emp_query.par file are as follows:

QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"'
NOLOGFILE=y
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp
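
The parameter file is then passed in with the PARFILE parameter, for example:

> expdp hr/hr PARFILE=emp_query.par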


The following is an example of using the SCHEMAS parameter. Note that user hr is allowed to specify more than one schema because the EXP_FULL_DATABASE role was previously assigned to it for the purpose of these examples.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe



The following is an example of using the STATUS parameter.

> expdp hr/hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300

This example exports the hr and sh schemas and displays the status of the export every 5 minutes (5 x 60 seconds = 300 seconds).



The following is an example of using the TABLESPACES parameter. The example assumes that tablespaces tbs_4, tbs_5, and tbs_6 already exist.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6


The following is an example of using the TRANSPORT_FULL_CHECK parameter. It assumes that tablespace tbs_1 exists.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log


The following is an example of using the TRANSPORT_TABLESPACES parameter in a file-based job (rather than network-based). The tablespace tbs_1 is the tablespace being moved. This example assumes that tablespace tbs_1 exists and that it has been set to read-only.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log



CONTINUE_CLIENT

Syntax and Description

In logging mode, status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.

Example

Export> CONTINUE_CLIENT


EXIT_CLIENT

Purpose

Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.

Syntax and Description

EXIT_CLIENT

Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.

Example

Export> EXIT_CLIENT
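
Because the job keeps running, you can re-attach to it later with the ATTACH parameter; exp_job here reuses the job name from the earlier JOB_NAME example:

> expdp hr/hr ATTACH=exp_job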



Example 2-1 Performing a Table-Mode Export

expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y



Estimating Disk Space Needed in a Table-Mode Export

Example 2-3 shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resource (hr) schema: employees, departments, and locations.

Example 2-3 Estimating Disk Space Needed in a Table-Mode Export

> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees,
departments, locations LOGFILE=estimate.log


Example 2-5 shows a full database Export that will have 3 parallel worker processes.

Example 2-5 Parallel Full Export

> expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

scheduling crontab jobs in Linux or Unix




1. Scheduling a Job For a Specific Time
The basic usage of cron is to execute a job at a specific time, as shown below. This will execute the full backup shell script (full-backup) on June 10th at 08:30 AM.

Please note that the time field uses 24-hour format: for 8 AM use 8, and for 8 PM use 20. The full field order is summarized after this example.
30 08 10 06 * /home/ramesh/full-backup
  • 30 – 30th Minute
  • 08 – 08 AM
  • 10 – 10th Day
  • 06 – 6th Month (June)
  • * – Every day of the week
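For reference, every crontab entry has five schedule fields followed by the command to run, always in this order:

minute (0-59)   hour (0-23)   day-of-month (1-31)   month (1-12)   day-of-week (0-6, 0 = Sunday; most crons also accept 7)   /path/to/command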
2. Schedule a Job For More Than One Instance (e.g. Twice a Day)
The following entry takes an incremental backup twice a day, every day.

This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 every day. Comma-separated values in a field mean the command is executed at each of the listed times.
00 11,16 * * * /home/ramesh/bin/incremental-backup
  • 00 – 0th Minute (Top of the hour)
  • 11,16 – 11 AM and 4 PM
  • * – Every day
  • * – Every month
  • * – Every day of the week
3. Schedule a Job for Specific Range of Time (e.g. Only on Weekdays)
If you want a job to run every hour within a specific range of time, use the following.
Cron Job everyday during working hours
This example checks the status of the database every day (including weekends) during working hours, 9 a.m. to 6 p.m.
00 09-18 * * * /home/ramesh/bin/check-db-status
  • 00 – 0th Minute (Top of the hour)
  • 09-18 – 9 am, 10 am, 11 am, 12 noon, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
  • * – Every day
  • * – Every month
  • * – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e. excluding Sat and Sun) during working hours, 9 a.m. to 6 p.m.
00 09-18 * * 1-5 /home/ramesh/bin/check-db-status
  • 00 – 0th Minute (Top of the hour)
  • 09-18 – 9 am, 10 am, 11 am, 12 noon, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
  • * – Every day
  • * – Every month
  • 1-5 – Mon, Tue, Wed, Thu and Fri (every weekday)

some basic SQLs for beginners in the Oracle database



Oracle Database Commands and Queries:

1. To view all the tables in the dictionary:
SQL> select table_name from dictionary;

2. To identify the database name:
SQL> select name from v$database;

3. To identify the instance name:
SQL> select instance from v$thread;

4. To find the database block size:
SQL> select value from v$parameter where name = 'db_block_size';

5. To list the names of the data files:
SQL> select name from v$datafile;

6. To identify the datafiles that make up the SYSTEM tablespace:
SQL> select file_name from dba_data_files where tablespace_name = 'SYSTEM';

7. To check how much space is free in the database and how much is used:
SQL> select sum(bytes)/1024 "free space in KB" from dba_free_space;
SQL> select sum(bytes)/1024 "used space in KB" from dba_segments;

8. To list the name and creation date of database users:
SQL> select username, created from dba_users;

9. Where are the existing control files located and what are their names?
SQL> select * from v$controlfile;
(or)
SQL> show parameter control;
(or)
SQL> select name from v$controlfile;

10. What is the initial sizing of the datafile section in your control file?
SQL> select records_total from v$controlfile_record_section where type = 'DATAFILE';

11. To list the number and location of existing log files:
SQL> select member from v$logfile;

12. To display the number of redo log file groups and members your database has:
SQL> select group#, members, status from v$log;

13. In which log mode (ARCHIVELOG or NOARCHIVELOG) is your database configured?
SQL> select log_mode from v$database;

14. Is archiving enabled?
SQL> select archiver from v$instance;

15. To view all the tablespace names:
SQL> select tablespace_name from dba_tablespaces;

16. To identify the different types of segments in the database:
SQL> select distinct segment_type from dba_segments;

script to drop all objects in your schema


This script can be used to drop all the objects in your schema. It is very useful when you want a fresh schema and to start from scratch, with no need to drop objects individually or recreate the user.

--------------------------------------------------------------
-- PL/SQL script to drop all the objects in your
-- current schema
--------------------------------------------------------------
declare
  v_str1 varchar2(200) := null;
  cursor get_sql is
    select 'drop ' || object_type || ' ' || object_name
           || decode(object_type, 'TABLE', ' cascade constraints purge') v_str1
    from   user_objects
    where  object_type in ('TABLE','VIEW','PACKAGE','TYPE','PROCEDURE',
                           'FUNCTION','TRIGGER','SEQUENCE','SYNONYM')
    order by object_type, object_name;
begin
  open get_sql;
  loop
    fetch get_sql into v_str1;
    exit when get_sql%notfound;
    begin
      execute immediate v_str1;
    exception
      -- the object may already be gone if it was dropped together with a
      -- parent object (e.g. a trigger dropped along with its table)
      when others then null;
    end;
  end loop;
  close get_sql;
end;
/
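
Afterwards you can confirm the schema is empty; anything left over usually belongs to an object type not covered by the cursor above:

SQL> select object_type, count(*) from user_objects group by object_type;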

Some very useful SQLs for Data Guard

Backup - Data Guard

Startup commands
To remove a delay from a standby
Cancel managed recovery
Register a missing log file
If FAL doesn't work and it says the log is already registered
Check which logs are missing
Disable/Enable archive log destinations
Turn on fal tracing on the primary db
Stop the Data Guard broker
Show the current instance role
Logical standby apply stop/start
See how up to date a physical standby is
Display info about all log destinations
Display log destinations options
List any standby redo logs


Startup commands

 startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;


To remove a delay from a standby

 alter database recover managed standby database cancel;
alter database recover managed standby database nodelay disconnect;

The DISCONNECT keyword is short for DISCONNECT FROM SESSION, so the second command can also be written as:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY
  DISCONNECT FROM SESSION;

Cancel managed recovery

 alter database recover managed standby database cancel;

Register a missing log file

 alter database register physical logfile '<fullpath/filename>';

If FAL doesn't work and it says the log is already registered

 alter database register or replace physical logfile '<fullpath/filename>';

If that doesn't work, try this...
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

wait for the recovery to finish - then cancel
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;



To check which archive logs are missing

Run this on the standby...

select local.thread#
,      local.sequence# from
       (select thread#
       ,       sequence#
       from    v$archived_log
       where dest_id=1) local
where  local.sequence# not in
       (select sequence#
       from v$archived_log
       where dest_id=2 and
       thread# = local.thread#)
/


To Disable/Enable archive log destinations 

 alter system set log_archive_dest_state_2 = 'defer';
(defer = stop shipping the archive logs to the target)

alter system set log_archive_dest_state_2 = 'enable';
(enable = start shipping the archive logs to the target)

Turn on fal tracing on the primary db

 alter system set LOG_ARCHIVE_TRACE = 128;

Stopping the Data Guard broker

 alter system set dg_broker_start=false
/

Show the current instance role (primary or standby)

 select    database_role
from    v$database
/


Logical standby apply stop/start (if your standby DB is a logical standby)

Stop...
alter database stop logical standby apply;

Start...
alter database start logical standby apply;



See how up to date a physical standby is

Run this on the primary to find the current log sequence

set numwidth 15
select    max(sequence#) current_seq
from    v$log
/


Then run this on the standby to check max applied sequence

set numwidth 15
select    max(applied_seq#) last_seq
from    v$archive_dest_status
/
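
An alternative check on the standby is to find the highest archived log that has actually been applied:

set numwidth 15
select max(sequence#) applied_seq
from   v$archived_log
where  applied = 'YES'
/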


Display info about all log destinations

To be run on the primary
set lines 100
set numwidth 15
column ID format 99
column "SRLs" format 99
column active format 99
col type format a4

select    ds.dest_id id
,    ad.status
,    ds.database_mode db_mode
,    ad.archiver type
,    ds.recovery_mode
,    ds.protection_mode
,    ds.standby_logfile_count "SRLs"
,    ds.standby_logfile_active active
,    ds.archived_seq#
from    v$archive_dest_status    ds
,    v$archive_dest        ad
where    ds.dest_id = ad.dest_id
and    ad.status != 'INACTIVE'
order by
    ds.dest_id
/


Display log destinations options


To be run on the primary
set numwidth 8 lines 100
column id format 99
select    dest_id id
,    archiver
,    transmit_mode
,    affirm
,    async_blocks async
,    net_timeout net_time
,    delay_mins delay
,    reopen_secs reopen
,    register,binding
from    v$archive_dest
order by
    dest_id
/


List any standby redo logs

 set lines 100 pages 999
col member format a70
select    st.group#
,    st.sequence#
,    ceil(st.bytes / 1048576) mb
,    lf.member
from    v$standby_log    st
,    v$logfile    lf
where    st.group# = lf.group#
/

Wednesday, June 19, 2013

disable password verify function in oracle

The password verify function can be disabled by setting it to NULL.


To make passwords more complex, most DBAs point the profile at the Oracle-provided verify function (created by the utlpwdmg.sql script). Often the default complexity rules in that script don't satisfy your organization's requirements, so you may want to disable the function or replace it with your own.

SOLUTION:

alter profile default limit password_verify_function null;

To enable it back, point the profile at a verify function again; verify_function is the name created by utlpwdmg.sql, so substitute your own function name if it differs:

alter profile default limit password_verify_function verify_function;
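
To see which verify function, if any, each profile currently uses:

SQL> select profile, limit
     from   dba_profiles
     where  resource_name = 'PASSWORD_VERIFY_FUNCTION';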

Failed to shutdown DBConsole Gracefully

Environment:
Oracle: Oracle 10g Release 2 (10.2.0.5)
OS: Linux
Summary: Fix the Oracle EM problem "Failed to shutdown DBConsole Gracefully"
Today, on a Linux Oracle machine without Oracle Enterprise Manager configured, I wanted to install Oracle EM.
I used the command
emca -config dbcontrol db -repos create
and it reported an error:
- An instance of Oracle Enterprise Manager 10g Database Control is already running.
This is strange, since I had never created or started an EM process.
So I tried to stop the existing EM process (emctl is an OS command, not a SQL*Plus one):
$ emctl stop dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.5.0
Copyright (c) 1996, 2006 Oracle Corporation.  All rights reserved.
http://oracletest:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Database Control ...
--- Failed to shutdown DBConsole Gracefully ---
failed.
It fails, and starting EM fails too:
$ emctl start dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.5.0
Copyright (c) 1996, 2006 Oracle Corporation.  All rights reserved.
http://oracletest:1158/em/console/aboutApplication
- An instance of Oracle Enterprise Manager 10g Database Control is already running.
A Metalink note says this may happen when the /etc/hosts file does not contain the following row:
127.0.0.1    localhost.localdomain  localhost
The note also says to check the emctl.pid file under $ORACLE_HOME/$HOST_$ORACLE_SID; on my host that is /oracle/app/oracle/product/10.2.0/oracletest_mes:
/oracle/app/oracle/product/10.2.0/oracletest_mes> cat emctl.pid
36599
Checking process 36599 with ps -ef shows it is not actually an Oracle EM DBConsole process, which is why emctl stop dbconsole cannot stop the instance.
After deleting emctl.pid, emctl stop dbconsole succeeds.
But recreating the DBConsole fails again, this time with an error that port 3938 is already in use.
Check $ORACLE_HOME/install/portlist.ini:
Enterprise Manager Console HTTP Port (mes) = 1158
Enterprise Manager Agent Port (mes) = 3938
This shows the EM agent uses port 3938, meaning EM has two processes: one for the dbconsole and one for the emagent. So the next step is to kill the emagent process and recreate the DBConsole. This time, everything is OK.
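
If you want to confirm which process is holding the agent port before killing it, a standard Linux check is (3938 is the agent port from portlist.ini):

netstat -anp | grep 3938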