Monday, July 25, 2016

How to define the RMAN retention policy time?

Many times I have heard DBAs complain about the RMAN retention policy and the problems they face during restore or recovery because of it.
For example, a DBA posted that his retention policy was set to a recovery window of 90 days, but when he tried to restore the database to a date 30 days back, RMAN reported that recovery to that point was not possible. The DBA wondered why this could not be done when he had a retention of 90 days.
Here is the answer.
We all know that RMAN stores backup metadata in the control file's reusable section when a recovery catalog is not configured. By default this information is kept for 7 days, which can be changed by modifying the CONTROL_FILE_RECORD_KEEP_TIME parameter.
Whenever we define the RMAN retention policy, either as a recovery window or as redundancy of some value X, we need to remember that the control file still honours the value defined for CONTROL_FILE_RECORD_KEEP_TIME. That means if the recovery window is set to 30 days but the above parameter is left at 7 days (the default), the backup records older than 7 days may already have been overwritten and you cannot recover beyond 7 days... which is very unfortunate.
Right! So let's see what can be done about this.
We should always set CONTROL_FILE_RECORD_KEEP_TIME to a higher value than the retention policy. As a formula, it should be:
CONTROL_FILE_RECORD_KEEP_TIME = retention policy value + level 0 backup interval + 1
Let me give an example for easier understanding.
Suppose your retention policy is set to a recovery window of 90 days and a weekly full level 0 backup is scheduled. Then CONTROL_FILE_RECORD_KEEP_TIME = 90 (retention policy) + 7 (level 0 backup interval) + 1 = 98.
Note: In the worst case, you should set it at least equal to the retention policy value.
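
For the 90-day example above, the settings could be put in place like this (a minimal sketch; SCOPE=BOTH assumes the instance uses an spfile):

SQL> ALTER SYSTEM SET control_file_record_keep_time = 98 SCOPE=BOTH;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 90 DAYS;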
What happens if I don't set it to a higher value?
In that case, the control file can overwrite the backup metadata before RMAN marks those backups obsolete, and even though the backup pieces physically exist, you cannot use them.

RMAN Retention Policy Based on Redundancy

I find that most of us use the RMAN recovery window retention policy; I have rarely seen the redundancy policy used. Here we will discuss the disadvantages of the redundancy policy.

The REDUNDANCY parameter of the CONFIGURE RETENTION POLICY command specifies how many backups of each datafile and control file that RMAN should keep. In other words, if the number of backups for a specific datafile or control file exceeds the REDUNDANCY setting, then RMAN considers the extra backups as obsolete.

Suppose we have an RMAN retention policy of "REDUNDANCY 2". This means that as long as we have at least two backups of the same datafile, controlfile/spfile or archivelog, the older backups become obsolete and RMAN is allowed to safely remove them.

Now, let's also suppose that every night we backup our database using the following script:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> run {
        backup database plus archivelog;
        delete noprompt obsolete redundancy 2;
      }

The backup task is quite simple: first of all it ensures that the controlfile autobackup feature is on, then it backs up the database and archivelogs and, at the end, it deletes all obsolete backups using the REDUNDANCY 2 retention policy.
Using the above approach we might think that we can restore our database as it was two days ago, right? For example, if we have a backup taken on Monday and another one taken on Tuesday we may restore our database as it was within the (Monday_last_backup - Today) time interval. Well, that's wrong!

Consider the following scenario :
1.) On Monday night we back up the database using the above script;
2.) On Tuesday, during the day, we drop a tablespace. Because this is a structural database change, a controlfile autobackup is triggered, i.e. we have a new controlfile backup.
3.) On Tuesday night we back up the database again... nothing unusual, right?

Well, the tricky part concerns the DELETE OBSOLETE command. When the backup script runs this command, RMAN finds three controlfile backups: one from the Monday backup, one from the structural change and a third from our just-finished Tuesday backup database command. According to the "REDUNDANCY 2" retention policy, RMAN assumes it is safe to delete the controlfile backup taken during the Monday night backup, because it is outside our retention policy and is the oldest one. Oops... this means we are going to have a big problem restoring the database to how it was before our structural change, because we no longer have a controlfile backup from that time.

So, if we intend to perform incomplete recovery of our database to a previous point in time, it is really a good idea to switch to a retention policy based on a "RECOVERY WINDOW" instead. In our case, a RECOVERY WINDOW OF 2 DAYS would be more appropriate.
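
Switching the configuration is a single RMAN command, for example:

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;

After this, the DELETE NOPROMPT OBSOLETE command in the nightly script can be left without an explicit REDUNDANCY clause so that it simply follows the configured recovery window.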

Thursday, July 21, 2016

Difference Between VIEW and Materialized View


View
1. In views, the query result is not stored on disk or in the database.
2. When we create a view on a table, the rowid returned by the view is the same as in the original table.
3. With a view we always get the latest data.
4. A view does not need an extra trigger or any automatic method to refresh it.

Materialized View
1. A materialized view stores the query result on disk, in a table.
2. In the case of a materialized view, the rowid is different.
3. A materialized view must be refreshed to get the latest data.
4. A materialized view needs an extra trigger or some automatic method to keep it refreshed.


Materialized View
A materialized view is a database object that contains the results of a query. The FROM clause of the query can name tables, views, and other materialized views. Collectively these objects are called master tables (a replication term) or detail tables (a data warehousing term). This reference uses "master tables" for consistency. The databases containing the master tables are called the master databases.

When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view. Oracle Database uses these objects to maintain the materialized view data. You must have the privileges necessary to create these objects.

Materialized View Log
When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must re-execute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.

A materialized view log is located in the master database in the same schema as the master table. A master table can have only one materialized view log defined on it. Oracle Database can use this materialized view log to perform fast refreshes for all fast-refreshable materialized views based on the master table.

To fast refresh a materialized join view, you must create a materialized view log for each of the tables referenced by the materialized view.
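
For example, to support fast refresh of the mv_simple example below, the master table servers would need a materialized view log; a minimal sketch (assuming servers has a primary key):

CREATE MATERIALIZED VIEW LOG ON servers
WITH PRIMARY KEY;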

Privileges required
create materialized view
create any materialized view
drop any materialized view
delete any table
insert any table
lock any table
select any table
under any table
update any table
create table
create view
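
As a sketch, granting the basic privileges to a hypothetical user mv_owner (an assumed name) could look like:

GRANT CREATE MATERIALIZED VIEW TO mv_owner;
GRANT CREATE TABLE TO mv_owner;
GRANT CREATE VIEW TO mv_owner;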

Syntax (Fast Refresh)
CREATE MATERIALIZED VIEW <schema.name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
BUILD IMMEDIATE
REFRESH <FAST | FORCE> ON <COMMIT | DEMAND>
<USING INDEX | USING NO INDEX>
INITRANS <integer>
STORAGE CLAUSE

AS (<SQL statement>);

Example:
CREATE MATERIALIZED VIEW mv_simple
TABLESPACE YAKU9
BUILD IMMEDIATE
REFRESH FAST ON COMMIT AS
SELECT * FROM servers;

Syntax (Force Refresh)
CREATE MATERIALIZED VIEW <schema.name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
BUILD IMMEDIATE
REFRESH <FAST | FORCE> ON <COMMIT | DEMAND>
AS
 (<SQL statement>);

Example:
CREATE MATERIALIZED VIEW mv_force
TABLESPACE YAKU9
NOCACHE
LOGGING
NOCOMPRESS
NOPARALLEL
BUILD IMMEDIATE
REFRESH FORCE ON DEMAND
WITH ROWID AS
SELECT * FROM servers;

Syntax (Complete Refresh)
CREATE MATERIALIZED VIEW <schema.name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
REFRESH <COMPLETE | FORCE>
START WITH <date>
NEXT <date_calculation>
[FOR UPDATE]
AS (<SQL statement>);

Example:
CREATE MATERIALIZED VIEW mv_complete
TABLESPACE YAKU9
REFRESH COMPLETE
START WITH SYSDATE
NEXT SYSDATE + 1
AS SELECT s.srvr_id, i.installstatus, COUNT(*) AS cnt
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id
GROUP BY s.srvr_id, i.installstatus;

Syntax (Complete Refresh Using Index)
CREATE MATERIALIZED VIEW <schema.name>
[LOGGING] [CACHE]
PCTFREE <integer>
PCTUSED <integer>
USING INDEX
TABLESPACE <tablespace_name>
REFRESH <COMPLETE | FORCE>
START WITH <date>
NEXT <date_calculation>
[FOR UPDATE]
AS (<SQL statement>);

Example:
CREATE SNAPSHOT mv_w_index
LOGGING CACHE
PCTFREE 0 PCTUSED 99
TABLESPACE YAKU9
REFRESH COMPLETE
AS SELECT s.srvr_id, COUNT(*) AS cnt
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id
GROUP BY s.srvr_id;

Syntax (Prebuilt Table)
CREATE MATERIALIZED VIEW <schema.name>
ON PREBUILT TABLE [WITH REDUCED PRECISION]
REFRESH <COMPLETE | FORCE>
START WITH <date>
NEXT <date_calculation>
AS (<SQL statement>);

Example:
CREATE TABLE mv_yaku(
month VARCHAR2(8),
state VARCHAR2(40),
sales NUMBER(10,2));

CREATE MATERIALIZED VIEW mv_yaku
ON PREBUILT TABLE WITH REDUCED PRECISION
AS SELECT t.calendar_month_desc AS month,
c.cust_state_province AS state,
SUM(s.amount_sold) AS sales
FROM times t, customers c, sales s
WHERE s.time_id = t.time_id AND s.cust_id = c.cust_id
GROUP BY t.calendar_month_desc, c.cust_state_province;

Syntax (Enable Query Rewrite)
CREATE MATERIALIZED VIEW <schema.name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
REFRESH <COMPLETE | FORCE>
START WITH <date>
NEXT <date_calculation>
ENABLE QUERY REWRITE
AS (<SQL statement>);

Example:
set linesize 121
col name format a30
col value format a30

SELECT name, value
FROM gv$parameter
WHERE name LIKE '%rewrite%';

EXPLAIN PLAN FOR
SELECT s.srvr_id, i.installstatus, COUNT(*)
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id
AND s.srvr_id = 502
GROUP BY s.srvr_id, i.installstatus;

SELECT * FROM TABLE(dbms_xplan.display);

CREATE MATERIALIZED VIEW mv_rewritea
TABLESPACE yaku
REFRESH ON DEMAND
ENABLE QUERY REWRITE
AS SELECT s.srvr_id, i.installstatus, COUNT(*) AS cnt
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id
GROUP BY s.srvr_id, i.installstatus;

EXPLAIN PLAN FOR
SELECT s.srvr_id, i.installstatus, COUNT(*)
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id
AND s.srvr_id = 502
GROUP BY s.srvr_id, i.installstatus;

SELECT * FROM TABLE(dbms_xplan.display);

-- if the base table may be updated then
ALTER SESSION
SET query_rewrite_integrity = STALE_TOLERATED;

Normalization with Example

Why do we need to do normalization?

To eliminate redundancy of data, i.e. having the same information stored in multiple places, which eventually becomes difficult to maintain and also increases the size of our database.
With normalization we will have tables with fewer columns, which makes data retrieval and insert, update and delete operations more efficient.


What do we mean when we say a table is not in normalized form?

Let's take an example to understand this.
Say I want to create a database which stores my friends' names and their top three favorite artists.
This database would be quite simple, so initially I'll have only one table in it, say a Friends table.
Here FID is the primary key.


FID   FNAME     FavoriteArtist
1     Srihari   Akon, The Corrs, Robbie Williams
2     Arvind    Enigma, Chicane, Shania Twain

This table is not in normal form. Why?

The FavoriteArtist column is not atomic (it does not hold a scalar value), i.e. it holds more than one value.
Let's modify this table.

FID   FNAME     FavoriteArtist1   FavoriteArtist2   FavoriteArtist3
1     Srihari   Akon              The Corrs         Robbie Williams
2     Arvind    Enigma            Chicane           Shania Twain

This table is also not in normal form. Why?

We have now changed our table and each column has only one value!! (So what's left?)
Because here we have multiple columns holding the same kind of value,

i.e. a repeating group of data, or repeating columns.

So what do we need to do to make it normal, or at least bring it into First Normal Form?
  1. We'll first break our single table into two.
  2. Each table should have information about only one entity, so we store our friend's information in one table and his favorite artists' information in another.
(For simplicity we are working with a few columns, but in a real-world scenario there could be columns like the friend's phone number, email and address, and the favorite artists' albums, awards received, country etc. In that case having two different tables would make complete sense.)


Friends table:
FID   FNAME
1     Srihari
2     Arvind

FavoriteArtist table:
FID   FavoriteArtist
1     Akon
1     The Corrs
1     Robbie Williams
2     Enigma
2     Chicane
2     Shania Twain

FID is a foreign key in the FavoriteArtist table which refers to FID in our Friends table.

Now we can say that our table is in first normal form.
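
As a rough SQL sketch of this decomposition (the column data types are assumptions for illustration):

CREATE TABLE friends (
  fid    NUMBER PRIMARY KEY,
  fname  VARCHAR2(40));

CREATE TABLE favorite_artist (
  fid              NUMBER REFERENCES friends(fid),
  favorite_artist  VARCHAR2(100));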

Remember, for First Normal Form:

1. Column values should be atomic, scalar, or should hold a single value.
2. No repetition of information or values across multiple columns.

So what does Second Normal Form mean?


For Second Normal Form, our database should already be in First Normal Form and every non-key column must depend on the entire primary key.

Here we can say that our Friends database was already in Second Normal Form.
Why?

Because we don't have a composite primary key in our Friends and FavoriteArtist tables.

Composite primary keys are primary keys made up of more than one column, but there is no such thing in our database.
Still, let's try to understand Second Normal Form with another example.
This is our new table:
Gadgets      Supplier   Cost    Supplier Address
Headphone    Abaci      123$    New York
Mp3 Player   Sagas      250$    California
Headphone    Mayas      100$    London

In the above table, Gadgets + Supplier together form a composite primary key.

Let’s check for dependency

If I know the gadget, can I know the cost?

No, the same gadget is provided by different suppliers at different rates.

If I know the supplier, can I know the cost?

No, because the same supplier can provide different gadgets.

If I know both the gadget and the supplier, can I know the cost?

Yes, we can.

So Cost is fully dependent (functionally dependent) on our composite primary key (Gadgets + Supplier).

Let's look at another non-key column, Supplier Address.

If I know the gadget, will I come to know the supplier address?

Obviously no.

If I know who the supplier is, can I get its address?

Yes.

So here Supplier Address is not completely dependent (it is only partially dependent) on our composite primary key (Gadgets + Supplier).

This table is surely not in Second Normal Form.

So what do we need to do to bring it into Second Normal Form?

Here again we'll break the table in two.
Gadgets      Supplier   Cost
Headphone    Abaci      123$
Mp3 Player   Sagas      250$
Headphone    Mayas      100$

Supplier   Supplier Address
Abaci      New York
Sagas      California
Mayas      London

We now know how to normalize up to Second Normal Form.

But let’s take a break over here and learn some definitions and terms.

Composite Key: A composite key is a primary key composed of multiple columns.
Functional Dependency: When the value of one column is dependent on another column.

So if the value of one column changes, the value of the other column changes as well.

e.g. Supplier Address is functionally dependent on supplier name. If the supplier's name is changed in a record, we need to change the supplier address as well.

S.Supplier → S.SupplierAddress

"In our table, the Supplier Address column is functionally dependent on the Supplier column."

Partial Functional Dependency: A non-key column is dependent on some, but not all, of the columns in a composite primary key.

In our above example, Supplier Address was partially dependent on our composite key columns (Gadgets + Supplier).

Transitive Dependency: A transitive dependency is a type of functional dependency in which the value in a non-key column is determined by the value in another non-key column.

With these definitions in mind let’s move to Third Normal Form.
For a table to be in Third Normal Form:
  • It should already be in Second Normal Form.
  • There should be no transitive dependency, i.e. we shouldn’t have any non-key column depending on any other non-key column.
Again we need to make sure that the non-key columns depend upon the primary key and not on any other non-key column.

Album                  Artist            No. of tracks   Country
Come on over           Shania Twain      11              Canada
History                Michael Jackson   15              USA
Up                     Shania Twain      11              Canada
MCMXC A.D.             Enigma            8               Spain
The cross of changes   Enigma            10              Spain

Although the above table looks fine, there is still something in it because of which we will normalize it further.

Album is the primary key of the above table.

Artist and No. of tracks are functionally dependent on Album (the primary key).

But can we say the same of Country as well?

In the above table, the Country value is repeated because of the Artist.

So in our table the Country column depends on the Artist column, which is a non-key column.

So we will move that information into another table and save the table from redundancy, i.e. repeating values in the Country column.

Album                  Artist            No. of tracks
Come on over           Shania Twain      11
History                Michael Jackson   15
Up                     Shania Twain      11
MCMXC A.D.             Enigma            8
The cross of changes   Enigma            10

Artist            Country
Shania Twain      Canada
Michael Jackson   USA
Enigma            Spain
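
A rough SQL sketch of this decomposition (data types are assumptions for illustration):

CREATE TABLE artist (
  artist   VARCHAR2(40) PRIMARY KEY,
  country  VARCHAR2(40));

CREATE TABLE album (
  album          VARCHAR2(40) PRIMARY KEY,
  artist         VARCHAR2(40) REFERENCES artist(artist),
  no_of_tracks   NUMBER);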


Normally this is considered enough and we don't really go on applying the other normal forms.

Most real-world applications have databases that are in Third Normal Form.
=========================================================================
Normalization In other words: 

First Normal Form
An entity is in First Normal Form (1NF) when all tables are two-dimensional with no repeating groups.

A row is in first normal form (1NF) if all underlying domains contain atomic values only. 1NF eliminates repeating groups by putting each into a separate table and connecting them with a one-to-many relationship. Make a separate table for each set of related attributes and uniquely identify each record with a primary key.
  • Eliminate duplicative columns from the same table.
  • Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second Normal Form
An entity is in Second Normal Form (2NF) when it meets the requirement of being in First Normal Form (1NF) and additionally:
  • Does not have a composite primary key, meaning that the primary key cannot be subdivided into separate logical entities.
  • All the non-key columns are functionally dependent on the entire primary key.
  • A row is in second normal form if, and only if, it is in first normal form and every non-key attribute is fully dependent on the key.
  • 2NF eliminates functional dependencies on a partial key by putting the fields in a separate table from those that are dependent on the whole key. An example is resolving many:many relationships using an intersecting entity.

Third Normal Form
An entity is in Third Normal Form (3NF) when it meets the requirement of being in Second Normal Form (2NF) and additionally:
  • Functional dependencies on non-key fields are eliminated by putting them in a separate table. At this level, all non-key fields are dependent on the primary key.
  • A row is in third normal form if and only if it is in second normal form and if attributes that do not contribute to a description of the primary key are moved into a separate table. An example is creating look-up tables.

Boyce-Codd Normal Form
Boyce-Codd Normal Form (BCNF) is a further refinement of 3NF. In his later writings Codd refers to BCNF as 3NF. A row is in Boyce-Codd normal form if, and only if, every determinant is a candidate key. Most entities in 3NF are already in BCNF.

BCNF covers very specific situations where 3NF misses inter-dependencies between non-key (but candidate key) attributes. Typically, any relation that is in 3NF is also in BCNF. However, a 3NF relation won't be in BCNF if (a) there are multiple candidate keys, (b) the keys are composed of multiple attributes, and (c) there are common attributes between the keys.

Fourth Normal Form
An entity is in Fourth Normal Form (4NF) when it meets the requirement of being in Third Normal Form (3NF) and additionally:
  • Has no multiple sets of multi-valued dependencies. In other words, 4NF states that no entity can have more than a single one-to-many relationship within an entity if the one-to-many attributes are independent of each other.
  • Many:many relationships are resolved independently.

Fifth Normal Form
An entity is in Fifth Normal Form (5NF) if, and only if, it is in 4NF and every join dependency for the entity is a consequence of its candidate keys.

Understanding SQL*Loader, Oracle export/import and Oracle Data Pump.


Oracle SQL Loader
Oracle SQL Loader is a utility for loading data into an Oracle database and is often used for transporting data from a non-Oracle source system to an Oracle data warehouse. It is a versatile utility that can load data in almost any format, can load multiple files at the same time into multiple tables and can load data from files on disk, on tape or from a named pipe.

It runs in one of 3 modes:

conventional load.
direct-path load.
external-path load.

The conventional load is the default method and has fewer restrictions (see below) than the direct-path load, which is generally much faster but less flexible. The direct-path load is faster for large data sets as it doesn't generate any undo data and bypasses the database buffer cache, but it is limited to use just on heap tables (see below for the other restrictions). The external-path load creates an external table for the specified data file and then executes SQL INSERT statements to load the data into the target table.

This mode has 2 advantages over direct-path and conventional loads:

1.         If a data file is big enough it will be loaded in parallel;
2.         The source data can be modified by SQL and PL/SQL functions as it is being loaded.

SQL*Loader enables you to:

Load data from multiple files (from disk, tape or named pipe) into one or more target tables in the same load
Load data in any character set supported by Oracle
Load or discard records depending on values in the input fields
Transform the data before loading using SQL functions
Generate unique sequential keys for specified columns
Append to existing data or replace existing data
Load large objects (lobs), collections (nested tables and varrays) and object-relational data

How It Works

Oracle SQL Loader processes the input data files according to the directions in a text file called the control file which specifies the names and locations of the source data files, the format of the data to be loaded and the data transformations to be performed when loading.

As the input files are processed, any records that do not pass format checks are written to the bad file and any records that do not meet the specified selection criteria are written to the discard file.

Records that pass both format and selection criteria are written to the specified target tables but they may still be rejected because of, for example, constraint violations in which case they are written to the bad file along with those records rejected for being invalid.

A log file containing a detailed summary of the load, including a description of any errors that occurred during the load is also produced.

How to Use Oracle SQL Loader

Oracle SQL Loader is initiated from the command line and the various parameters such as the name of the control file and the userid can be specified at the same time, but it is generally a lot easier to put all these parameters into a parameter file, thereby saving on typing and frustration when typos are made. Which of the conventional/direct-path/external-path load types to use is a trade-off between performance and flexibility.

The default load type is the conventional load which creates and executes SQL insert statements to load the data into the target tables.

This method is better when:

other users need to be able to update data in the target tables whilst new data is being loaded into them;
loading data into clustered tables;
loading a relatively small amount of rows into a large indexed table as the load process makes a copy of the original index before merging in the new keys (this is a relatively slow process for a large table);
loading data into a large table with referential or column check integrity constraints as these constraints are disabled during a direct path load and re-enabled when the load finishes requiring the whole table to be checked;
you want to ensure that a record will be rejected if it causes an Oracle error, is formatted incorrectly or violates a constraint on the target table;
insert triggers must be fired.      
           
The direct-path load is initiated by specifying DIRECT=TRUE when starting Oracle SQL Loader. This method writes formatted data blocks directly to the target tables bypassing the SQL layer making loading faster but has the restrictions mentioned above. This method should be used when you need to load a large amount of data and need to maximise performance.

When using external-path load the source file has to be in a location accessible to the database and specified by an Oracle directory and the user must have been granted read and write access to the Oracle directory.
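
As an illustration, a minimal conventional-path load might look like this (the table, column, file and user names are hypothetical; adding DIRECT=TRUE on the command line would switch it to a direct-path load):

-- employees.ctl
LOAD DATA
INFILE 'employees.csv'
APPEND
INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, hire_date DATE "YYYY-MM-DD")

sqlldr userid=scott/tiger control=employees.ctl log=employees.log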

Oracle export and import

The Oracle export and import utilities are provided by Oracle as a way of making logical backups of the database to complement physical backups made of the data files either by using RMAN or the o/s copy utility and of copying data from one database to another. This may be carried out for testing and/or training purposes or when upgrading to a new release of Oracle.

In Oracle 10g and later releases, these utilities are superseded by Oracle Data Pump which acts in much the same way as the original export and import utilities but is designed to make the process of transferring data from one database to another faster and more secure. Oracle Data Pump also provides a new network mode which removes the need to create intermediate files.

How do the Export and Import utilities work and what are they used for?

The Oracle export and import utilities are used to provide logical backups of objects in the database to supplement the physical backups made for example with RMAN (the Recovery Manager).

These utilities can work on several different levels: (full) database, tablespace, user (schema) or table levels. They can only be used for logical backup and recovery (for example to recover the data of a table accidentally truncated by a user) because the export utility can only take a snapshot of the data at the time it is run and unlike with objects restored from physical backups can't be rolled forward by using the redo logs.

Backups made using the Oracle export utility are performed with the database open so in this sense they are hot (online) backups. However recovered data can't be rolled forward as this would require the redo logs from the same point in time as the export.
This means exports must be performed when there are no updates occurring on the objects being exported to ensure that the backup is consistent and can therefore be used for recovery.

Use of the export utility is also a good way to capture the metadata of the database/schemas/tables as the export process generates the SQL statements to recreate the database objects (with or without the associated data) and stores them in the export dump file (by default called expdat.dmp).

The exported file is in a proprietary (binary) format and can only be read by the import utility, and not by the other utilities such as RMAN or SQL*Loader.

A new feature was added to the Oracle export utility in Oracle 9i to enable the export of just a subset of the table data by using a query to specify the data to be exported.
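
For example, exporting a subset of a table with the legacy exp utility might look like this (the user, table and predicate are hypothetical, and the quoting of the QUERY parameter is operating-system dependent):

exp scott/tiger tables=emp query=\"WHERE deptno=10\" file=emp_dept10.dmp log=emp_dept10.log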

Summary

Although no longer supported in Oracle 11g (except for downgrading to a previous release), the Oracle export and import utilities have for many years been the primary utilities used for supplemental (logical) backups and for transferring data between databases. They are relatively easy to use and understand and reasonably quick to run for small datasets.

The main disadvantage to using them has been that the availability of an export dump file containing unencrypted data from a production database is a security risk - anyone with access to the file can copy it and load it into another database. For this and other reasons, these utilities are being replaced with Oracle Data Pump import and export.

Oracle Data Pump

Oracle Data Pump is a server-side utility that replaces the venerable export and import utilities. The export and import utilities have been provided by Oracle for many years as a way of making logical backups of the database and of copying data from one database to another and Oracle data pump does exactly the same thing.

So why were they replaced?

Well, one of the biggest problems with the old export and import utilities is that there was no security. These tools ran on the client and the export dump file was stored on the client and was therefore accessible to anyone who could log on to the PC.

As Data Pump runs on the server, the export dump files can be stored only in directory objects created by the DBA, and the DBA therefore controls who has access to the dump files. Oracle Data Pump jobs also run asynchronously, which enables you to start a job (to export or import data), disconnect and then re-connect later to monitor progress.

How does Oracle Data Pump work?

Oracle Data Pump consists of 3 distinct components:

1.     expdp and impdp (the command-line interface tools)
2.     the PL/SQL package DBMS_DATAPUMP (the Data Pump API)
3.     the PL/SQL package DBMS_METADATA (the metadata API)

Expdp and impdp are used to export data from and import data into the database respectively by invoking subroutines in DBMS_DATAPUMP.
Metadata is extracted and loaded by use of the DBMS_METADATA package.
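
For example, a schema-level export and import could look like this (the schema and file names are hypothetical; DATA_PUMP_DIR is the default directory object and must be accessible to the database server):

expdp system schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log
impdp system schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log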

As with the export and import utilities data pump can work on several different levels: (full) database, tablespace, transportable tablespace, user (schema) or table levels. Unlike export/import, however, data pump can use several different methods to export or import data.

These methods are:

1.     data file copying
2.     direct path
3.     external tables
4.     conventional path
5.     network link

Data file copying

The quickest way to copy data from one database to another is just to transfer the meta data (the information about the structure of the database such as tables, partitions, sub partitions) via use of Oracle  data pump and then physically copy the relevant data files via use of operating system commands. This method  is most commonly used for transferring a whole tablespace (when the TRANSPORT_TABLESPACES parameter is specified) but can also be used at the table level when TRANSPORTABLE=ALWAYS is specified. The character set must be the same on both databases to be able to use this method.
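
A sketch of a transportable tablespace export (the tablespace name is hypothetical, and the tablespace must be made read-only first):

SQL > ALTER TABLESPACE users_data READ ONLY;

expdp system transport_tablespaces=users_data directory=DATA_PUMP_DIR dumpfile=users_data.dmp logfile=users_data_exp.log

The datafiles belonging to the tablespace are then copied at operating system level and plugged into the target database with impdp using the TRANSPORT_DATAFILES parameter.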

Direct path transfer

This is the second fastest method of transferring data as it bypasses the SQL layer of the database and is used automatically by Data Pump when the structure of the tables being exported or imported allows it. There are different restrictions for using this method on export and import - the full set of conditions is available in the Oracle utilities documentation.

External tables

When direct path transfer can't be used the next best method is to use the external table mechanism which is similar to but not the same as the SQL*Loader external table mechanism. When using this method the dump file is regarded as a table (just like any other except that it is outside the Oracle database) and the database SQL engine is then used to import or export the relevant data.  Oracle data pump will select this method automatically if it can't use direct path load or unload and data may be loaded using the external table mechanism having been unloaded using direct path and vice versa.

Conventional path

The conventional path is used when none of the other methods can be used for loading/unloading data because of the way the tables have been built.  More work has to be done by the database to load or unload data this way so performance is generally slower with this method.

Network link

The data pump import and export utilities can also use a database link to load or unload data from a remote database. However this is the slowest method as all the data has to be transferred over the network and is not recommended for large volumes of data. This method has to be used for read only databases.
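
For example, importing a schema directly over a database link, without an intermediate dump file (the link and schema names are hypothetical; the directory object is still needed for the log file):

impdp system network_link=remote_db schemas=scott directory=DATA_PUMP_DIR logfile=scott_netimp.log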

NOTE: Data Pump does not work with releases older than 10g Release 1.

Oracle RECYCLE BIN


Oracle 10g introduced the recycle bin. You can recover a table that you have dropped from the database by using the FLASHBACK TABLE command.

What is Recycle Bin
Oracle introduced the "Recycle Bin" feature in Oracle 10g to store all dropped objects.
When a table is dropped in Oracle 10g, the table and any associated objects such as indexes, constraints and other dependent objects are simply renamed with a prefix of BIN$.

Why Recycle Bin
A user drops a very important table--accidentally, of course--and it needs to be revived as soon as possible.

Oracle9i Database introduced the concept of a Flashback Query option to retrieve data from a point in time in the past, but it can't flash back DDL operations such as dropping a table.
The only recourse is to use tablespace point-in-time recovery in a different database and then recreate the table in the current database using export/import or some other method.

This procedure demands significant DBA effort as well as precious time, not to mention the use of a different database for cloning.
But with the Oracle 10g recycle bin feature, the user can easily restore the dropped object.
How to Enable/Disable Recycle Bin

Note: By default, RECYCLEBIN is enabled.

The following parameter shows whether the recycle bin is enabled for the database.

SQL > SELECT Value FROM V$parameter WHERE Name = 'recyclebin';
Value
-----
On

If the value is "on" then the recyclebin feature is enabled for the database.
If the value is "off" then the recyclebin feature is disabled.

The following commands are used to enable or disable the feature:

SQL > ALTER SYSTEM SET recyclebin = ON;
or
SQL > ALTER SESSION SET recyclebin = ON;
SQL > ALTER SYSTEM SET recyclebin = OFF;
or
SQL > ALTER SESSION SET recyclebin = OFF;

Show the Contents in RECYCLEBIN

Use the following commands to show all the objects that are stored in the recycle bin:

SQL > SHOW RECYCLEBIN;
Or
SQL > SELECT * FROM USER_RECYCLEBIN;

Example

The following example shows how an object moves to the recycle bin:

SQL > CREATE TABLE TEST_RBIN(VAL   NUMBER);
SQL > INSERT INTO TEST_RBIN(VAL) VALUES(10);
SQL > COMMIT;
SQL > DROP TABLE TEST_RBIN;

Print the Recycle bin Entries

SQL > SHOW RECYCLEBIN;

ORIGINAL NAME      RECYCLEBIN NAME                  OBJECT TYPE    DROP TIME
-----------------  -------------------------------  -------------  -------------------
TEST_RBIN          BIN$7fq9jEy8RSadimoE4xGjWw==$0   TABLE          2010-05-26:11:27:12

Restore the Objects 

The user can restore dropped tables by issuing the following command:
SQL > FLASHBACK TABLE << Table_Name >> TO BEFORE DROP;

Example

SQL > FLASHBACK TABLE TEST_RBIN TO BEFORE DROP;
SQL > SELECT * FROM TEST_RBIN;
       VAL
       ---
       10

It is possible to restore the table under a different name by issuing the following SQL command:

SQL > FLASHBACK TABLE << Dropped_Table_Name >> TO BEFORE DROP RENAME TO << New_Table_Name >>;
SQL > FLASHBACK TABLE TEST_RBIN TO BEFORE DROP RENAME TO TEST_RBIN1;

Using the above statement, it is possible to restore the various versions of the table data if the table has been created and dropped more than once.
While restoring, the system restores the tables in descending order (most recently dropped first).

Example

SQL > CREATE TABLE TEST_RBIN (COL1 NUMBER);
SQL > INSERT INTO TEST_RBIN VALUES (1);
SQL > COMMIT;
SQL > DROP TABLE TEST_RBIN;
SQL > CREATE TABLE TEST_RBIN (COL1 NUMBER);
SQL > INSERT INTO TEST_RBIN VALUES (2);
SQL > COMMIT;
SQL > DROP TABLE TEST_RBIN;
SQL > CREATE TABLE TEST_RBIN (COL1 NUMBER);
SQL > INSERT INTO TEST_RBIN VALUES (3);
SQL > COMMIT;
SQL > DROP TABLE TEST_RBIN;
SQL > SHOW RECYCLEBIN;

ORIGINAL NAME      RECYCLEBIN NAME                  OBJECT TYPE    DROP TIME
-----------------  -------------------------------  -------------  -------------------
TEST_RBIN          BIN$2e51YTa3RSK8TL/mPy+FuA==$0   TABLE          2010-05-27:15:23:43
TEST_RBIN          BIN$5dF60S3GSEO70SSYREaqCg==$0   TABLE          2010-05-27:15:23:43
TEST_RBIN          BIN$JHCDN9YwQR67XjXGOJcCIg==$0   TABLE          2010-05-27:15:23:42

SQL > FLASHBACK TABLE TEST_RBIN TO BEFORE DROP RENAME TO TEST_RBIN1;
SQL > FLASHBACK TABLE TEST_RBIN TO BEFORE DROP RENAME TO TEST_RBIN2;
SQL > FLASHBACK TABLE TEST_RBIN TO BEFORE DROP RENAME TO TEST_RBIN3;
SQL > SELECT * FROM TEST_RBIN1;
      COL1
----------
         3
SQL > SELECT * FROM TEST_RBIN2;
      COL1
----------
         2
SQL > SELECT * FROM TEST_RBIN3;
      COL1
    ----------
         1
Note:
The un-drop feature brings the table back to its original name, but not the associated objects like indexes and triggers, which are left with the recycled names. Sources such as views and procedures defined on the table are not recompiled and remain in the invalid state.
These old names must be retrieved manually and then applied to the flashed-back table.

Clearing the Recycle Bin

To clear the recycle bin, the following SQL statements can be used.
To purge only a particular table, the following statement is used:

SQL > PURGE TABLE << Table_Name >>

It is also possible to purge the table using the system-generated (recycle bin) name:

SQL > PURGE TABLE << Recyclebin_Name >>

This command will remove the table and all dependent objects such as indexes, constraints, and so on from the recycle bin, saving some space. To permanently drop an index from the recycle bin, the following statement can be used.

SQL > PURGE INDEX << Index_Name >>;

This will remove the index only, leaving the copy of the table in the recycle bin.
Sometimes it might be useful to purge at a higher level. For instance, to purge all the objects in the recycle bin belonging to a particular tablespace:

SQL > PURGE TABLESPACE << Tablespace_Name >>;


It is possible to purge only the recycle bin entries of a particular user in that tablespace.
This approach can come in handy in data warehouse-type environments where users create and drop many transient tables:

SQL > PURGE TABLESPACE << Tablespace_Name >> USER << User_Name >>;


To clear the complete recycle bin, the following statement can be used:

SQL > PURGE RECYCLEBIN;

It is possible to purge the table while dropping the table itself.
The following statement is used to achieve this:

SQL > DROP TABLE  << Table_Name >> PURGE;

Example
Create the Table and then Drop the table,

SQL >  CREATE TABLE TEST_RBIN (COL1 NUMBER);
SQL >  INSERT INTO TEST_RBIN VALUES (1);
SQL > COMMIT;
SQL > DROP TABLE TEST_RBIN;

View the recycle bin entries with the SHOW RECYCLEBIN option:

SQL > SHOW RECYCLEBIN;

ORIGINAL NAME      RECYCLEBIN NAME                  OBJECT TYPE    DROP TIME
-----------------  -------------------------------  -------------  -------------------
TEST_RBIN          BIN$2e51YTa3RSK8TL/mPy+FuA==$0   TABLE          2010-05-27:15:23:43

Clear the table by executing one of the following statements:
SQL > PURGE TABLE TEST_RBIN;
or
SQL > PURGE TABLE "BIN$2e51YTa3RSK8TL/mPy+FuA==$0";

Verify the recycle bin; it will no longer have any entries for the table TEST_RBIN:

SQL > SHOW RECYCLEBIN;

Note:
Once the table is purged from the recycle bin, it is no longer possible to restore it using the FLASHBACK command.
Space/Quota Issue

Objects in the Recycle Bin will remain in the database until the owner of the dropped objects decides to permanently remove them using the new PURGE command.
The Recycle Bin objects are counted against a user's quota. But Flashback Drop is a non-intrusive feature.
Objects in the Recycle Bin will be automatically purged by the space reclamation process if:

1. A user creates a new table or adds data that causes his/her quota to be exceeded.
2. The tablespace needs to extend its file size to accommodate create/insert operations.
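
To see how much recycle bin space is currently being held per tablespace, a query along these lines can be used (SPACE is reported in database blocks; 8192 is an assumed block size):

SQL > SELECT ts_name, SUM(space) * 8192 / 1024 / 1024 AS size_mb
      FROM dba_recyclebin
      GROUP BY ts_name;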

source: http://www.orafaq.com/node/2397