Questions on Oracle Data Pump
Here are some questions related to Data Pump that will help clear your doubts about it.
1.) What is Oracle Data Pump?
Oracle Data Pump is a new feature of Oracle Database 10g that provides high-speed, parallel, bulk data and metadata movement of Oracle database contents. A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement. In Oracle Database 10g, new Export (expdp) and Import (impdp) clients that use this interface have been provided. Oracle recommends that customers use these new Data Pump Export and Import clients rather than the original Export and Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.
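For illustration, a minimal schema-level export and import with the new clients might look like this (the hr schema, the dpump_dir directory object, and the file names are placeholders, not part of any standard setup):

expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log
impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_imp.log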
2.) Is Data Pump a feature or an option of Oracle 10g?
Data Pump is a fully integrated feature of Oracle Database 10g. Data Pump is installed automatically during database creation and database upgrade.
3.) What platforms is Data Pump provided on?
Data Pump is available on the Oracle Database 10g Standard Edition, Enterprise Edition, and Personal Edition. However, the parallel capability is only available on Oracle 10g Enterprise Edition. Data Pump is included on all the same platforms supported by Oracle 10g, including Unix, Linux, Windows NT, Windows 2000, and Windows XP.
4.) What are the system requirements for Data Pump?
The Data Pump system requirements are the same as the standard Oracle Database 10g requirements. Data Pump doesn't need a lot of additional system or database resources, but the time to extract and process the information will depend on the CPU and memory available on each machine. If system resource consumption becomes an issue while a Data Pump job is executing, the job can be dynamically throttled to reduce the number of execution threads.
5.) What is the performance gain of Data Pump Export versus Original Export?
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster than original Export because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.
6.) What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than original Import. The reason it is so much faster is that conventional Import uses only conventional-mode inserts, whereas Data Pump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.
7.) Does Data Pump require special tuning to attain performance gains?
No, Data Pump requires no special tuning. It runs optimally “out of the box”. Original Export and (especially) Import require careful tuning to achieve optimum results.
8.) Why are directory objects needed?
They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.
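As a sketch, a DBA might create a directory object and grant access to it like this (the path, directory name, and user are illustrative):

CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;

The hr user can then reference dpump_dir through the DIRECTORY parameter of expdp and impdp without needing any direct access to the server's file system.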
9.) What makes Data Pump faster than original Export and Import?
There are three main reasons that Data Pump is faster than original Export and Import. First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.
10.) How much faster is Data Pump than the original Export and Import utilities?
For a single stream, Data Pump Export is approximately 2 times faster than original Export, and Data Pump Import is approximately 15 to 45 times faster than original Import. Speed can be dramatically improved using the PARALLEL parameter.
11.) Why is Data Pump slower on small jobs?
Data Pump was designed for big jobs with lots of data. Each Data Pump job has a master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.
12.) Are original Export and Import going away?
Original Export is being deprecated with the Oracle Database 11g release. Original Import will always be supported so that dump files from earlier releases (release 5.0 and later) can still be imported. Original and Data Pump dump file formats are not compatible.
13.) Are Data Pump dump files and original Export and Import dump files compatible?
No, the dump files are not compatible or interchangeable. If you have original Export dump files, you must use original Import to load them.
14.) How can I monitor my Data Pump jobs to see what is going on?
In interactive mode, you can get a lot of detail through the STATUS command. In SQL, you can query the following views:
- DBA_DATAPUMP_JOBS - all active Data Pump jobs and the state of each job
- USER_DATAPUMP_JOBS - summary of the current user's active Data Pump jobs
- DBA_DATAPUMP_SESSIONS - all active user sessions that are attached to a Data Pump job
- V$SESSION_LONGOPS - progress of each active Data Pump job
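For example, a quick look at all active jobs might be (a query sketch against the documented DBA_DATAPUMP_JOBS view):

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;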
15.) Can you adjust the level of parallelism dynamically for more or less resource consumption?
Yes, you can dynamically throttle the number of threads of execution throughout the lifetime of the job. There is an interactive command mode in which you can adjust the level of parallelism. So, for example, you can start up a job during the day with PARALLEL=2, and then increase it at night to a higher level.
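A sketch of that scenario (user, schema, and job names are illustrative): start the job with a low degree of parallelism, using %U in the dump file name so multiple files can be written:

expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr%U.dmp PARALLEL=2 JOB_NAME=hr_export

Then, at night, attach to the running job and raise the degree from the interactive prompt:

expdp hr/hr ATTACH=hr_export
Export> PARALLEL=8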
16.) Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump Export through gzip. Starting in Oracle Database 11g, the COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
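For example, an 11g export with an inline-compressed dump file might look like this (names illustrative):

expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_comp.dmp COMPRESSION=ALL

COMPRESSION=ALL compresses both data and metadata; COMPRESSION=METADATA_ONLY does not require the Advanced Compression Option.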
17.) Does Data Pump support all data types?
Yes, all the Oracle database data types are supported via Data Pump's two data movement mechanisms, Direct Path and External Tables.
18.) What kind of object selection capability is available with Data Pump?
With Data Pump, there is much more flexibility in selecting objects for unload and load operations. You can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform. Almost all database object types can be excluded or included in an operation using the new EXCLUDE and INCLUDE parameters.
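For example (schema and file names illustrative), to unload only procedures and functions, or everything except indexes:

expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_code.dmp INCLUDE=PROCEDURE INCLUDE=FUNCTION
expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_noidx.dmp EXCLUDE=INDEX

Note that INCLUDE and EXCLUDE are mutually exclusive within a single job.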
19.) Is it necessary to use the command-line interface, or is there a GUI that you can use?
You can use either the command-line interface or the Oracle Enterprise Manager web-based GUI.
20.) Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and operating systems.
21.) Can I take one dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases. You can also just import different subsets of the data out of that single dump file set.
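For example, to pull just two tables out of a schema-level dump (table and file names illustrative):

impdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp TABLES=employees,departments

The same dump file set can then be imported again, with different filters, into another database.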
22.) Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY parameter to see how much disk space is required for the job's dump file set before you start the operation.
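A minimal sketch (schema name illustrative); no dump file is written, and the estimate is simply reported in the log output:

expdp hr/hr SCHEMAS=hr ESTIMATE_ONLY=YES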
23.) Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location, so you can see what is going on while the job runs. Clients may also detach from an executing job without affecting it.
24.) If a job is stopped either voluntarily or involuntarily, can I restart it?
Yes, every Data Pump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time without losing any data.
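For example, you can attach to a stopped job by name and restart it (the job name shown is the default generated for a schema-mode export; yours may differ):

expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> START_JOB
Export> STATUS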
25.) Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump it is now possible to change the definition of some objects as they are created at import time. For example, you can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if you are moving across platforms with different file system syntax.
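A sketch of such a remap during a full import (paths are illustrative; in practice a parameter file avoids shell quoting problems):

impdp system/manager FULL=Y DIRECTORY=dpump_dir DUMPFILE=full.dmp REMAP_DATAFILE='/u01/oradata/db1/users01.dbf':'C:\oradata\db1\users01.dbf'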
26.) Is Network Mode supported on Data Pump?
Yes, Data Pump Export and Import both support a network mode in which the job's source is a remote Oracle instance. This allows the unload (Export) and the load (Import) to overlap, so the two processes do not have to be serialized. A database link is used for the network connection, and you don't have to worry about allocating file space because there are no intermediate dump files.
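A sketch of a network-mode import (link, user, and service names are illustrative): first create a database link to the source instance, then pull the data straight across the network with no dump file:

CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY hr USING 'src_tns';

impdp hr/hr NETWORK_LINK=source_db SCHEMAS=hr DIRECTORY=dpump_dir

The DIRECTORY parameter here is only needed for the log file.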
27.) Does Data Pump support Flashback?
Yes, Data Pump supports the Flashback infrastructure, so you can perform an export and get a dump file set that is consistent with a specified point in time or SCN.
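For example, to export the hr schema as of a specific SCN (the SCN value is illustrative):

expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_scn.dmp FLASHBACK_SCN=3046914

FLASHBACK_TIME can be used instead when a timestamp is more convenient than an SCN.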
28.) Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i-compatible Export that operates against Oracle Database 10g will ship with Oracle 10g, but it does not export Oracle Database 10g features. Also, Data Pump Export has new syntax and a new client executable, so original Export scripts will need to change. Oracle recommends that customers convert to the new Data Pump Export.
29.) How do I import an old dump file into Oracle 10g? Can I use Original Import or do I have to convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can only read Oracle Database 10g (and later) Data Pump Export dump files. Data Pump Import has new syntax and a new client executable, so original Import scripts will need to change.
30.) When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Data Pump Export and Import may be used less frequently, but for very important tasks such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and application deployment throughout a corporation.
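For contrast, a minimal SQL*Loader run for a daily CSV feed might look like this (table, columns, and file names are illustrative). The control file, say employees.ctl:

LOAD DATA
INFILE 'employees.csv'
APPEND INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, salary)

is then executed with:

sqlldr hr/hr CONTROL=employees.ctl LOG=employees.log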
31.) When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Data Pump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.
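A sketch of the transportable tablespace flow (tablespace and path names illustrative): make the tablespace read-only, export its metadata, copy the datafiles to the target, then plug them in:

ALTER TABLESPACE users READ ONLY;

expdp system/manager DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users

After copying users01.dbf to the target system:

impdp system/manager DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/users01.dbf'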
Conclusion
Data Pump is fast and flexible. It replaces original Export and Import starting in Oracle Database 10g. Moving to Data Pump is easy, and it opens up a world of new options and features.