Friday, December 13, 2013

Analyzing the top Command in Linux Environments

How do I determine CPU and memory utilization based on running processes in Linux using top?
The top command provides a real-time look at what is happening with your system. Top produces so much output that a new user may be overwhelmed by everything that is presented and what it means.
Let's take a look at top one line at a time.
The first line in top:
top - 22:09:08 up 14 min,  1 user,  load average: 0.21, 0.23, 0.30
"22:09:08" is the current time; "up 14 min" shows how long the system has been up; "1 user" is how many users are logged in; "load average: 0.21, 0.23, 0.30" is the load average of the system over the last 1, 5, and 15 minutes.
Load average is an extensive topic, and understanding its inner workings can be daunting. The simplest definition is that load average is the CPU demand averaged over a period of time. On a single-CPU system, a load average of 1 means the CPU is fully utilized but processes are not having to wait for it; a load average above 1 indicates that processes must wait and the system will be less responsive. The same threshold applies per CPU on multi-CPU systems. If your load average is consistently above 3 per CPU and your system is running slowly, you may want to upgrade to more CPUs or a faster CPU.
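To put the load average in context on a multi-CPU machine, compare it against the number of CPUs. A quick sketch, assuming a reasonably current Linux with the coreutils nproc command:
$ nproc               # number of CPUs available to the scheduler
$ cat /proc/loadavg   # the same 1, 5 and 15 minute averages reported by top and uptime
A load of 2.0 on a 4-CPU box is only about half busy; the same load on a single CPU means processes are queuing.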
The second line in top:
Tasks:  82 total,   1 running,  81 sleeping,   0 stopped,   0 zombie
Shows the number of processes and their current state.
The third line in top:
Cpu(s):  9.5%us, 31.2%sy,  0.0%ni, 27.0%id,  7.6%wa,  1.0%hi, 23.7%si,  0.0%st
Shows CPU utilization details: "9.5%us" – user processes are using 9.5%; "31.2%sy" – system (kernel) processes are using 31.2%; "0.0%ni" – time spent on niced (low-priority) user processes; "27.0%id" – idle CPU, i.e. the percentage still available; "7.6%wa" – time the CPU spends waiting for IO; "1.0%hi" and "23.7%si" – time servicing hardware and software interrupts; "0.0%st" – time stolen by a hypervisor.
When first analyzing the Cpu(s) line in top look at the %id to see how much cpu is available. If %id is low then focus on %us, %sy, and %wa to determine what is using the CPU.
The fourth and fifth lines in top:
Mem:    255592k total,   167568k used,    88024k free,    25068k buffers
Swap:   524280k total,        0k used,   524280k free,    85724k cached
Describes the memory usage. These numbers can be misleading. "255592k total" is the total memory in the system; "167568k used" is the part of RAM that currently contains information; "88024k free" is the part of RAM that contains no information; "25068k buffers" and "85724k cached" are the buffered and cached data used for IO.
So what is the actual amount of free RAM available for programs to use ?
The answer is: free + (buffers + cached)
88024k + (25068k + 85724k) = 198816k
How much RAM is being used by programs?
The answer is: used – (buffers + cached)
167568k – (25068k + 85724k) = 56776k
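The same arithmetic can be cross-checked with the free command. A sketch of what it would print for the snapshot above (older procps versions show a "-/+ buffers/cache" line that is exactly these two derived figures; newer versions print an "available" column instead):
$ free -k
             total       used       free     shared    buffers     cached
Mem:        255592     167568      88024          0      25068      85724
-/+ buffers/cache:      56776     198816
Swap:       524280          0     524280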
The processes information:
Top displays the processes using the most CPU in descending order. Let's describe each column that represents a process.
 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
3166 apache    15   0 29444 6112 1524 S  6.6  2.4   0:00.79 httpd
PID – process ID of the process
USER – User who is running the process
PR – The priority of the process
NI – Nice value of the process (higher value indicates lower priority)
VIRT – The total amount of virtual memory used
RES – Resident task size
SHR – Amount of shared memory used
S – State of the task. Values are S (sleeping), D (uninterruptible sleep), R (running), Z (zombie), or T (stopped or traced)
%CPU – Percentage of CPU used
%MEM – Percentage of Memory used
TIME+ – Total CPU time used
COMMAND – Command issued
Interacting with TOP
Now that we are able to understand the output from top, let's learn how to change the way the output is displayed.
Just press the following keys while top is running and the display will change in real time.
M – Sort by memory usage
P – Sort by CPU usage
T – Sort by cumulative time
z – Color display
k – Kill a process
q – quit
For example, if we want to kill the process with PID 3161, press "k"; a prompt will ask for the PID number, so enter 3161.
Command Line Parameters with TOP
You can control what top displays by issuing parameters when you run top.
-d – Controls the delay between refreshes
-p – Specify the process by PID that you want to monitor
-n – Update the display this number of times and then exit
If we want to monitor only the httpd process with a PID of 3166:
$ top -p 3166
If we want to change the delay between refreshes to 5 seconds:
$ top -d 5
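The options can also be combined. A hedged sketch using -n together with batch mode (-b, not covered above, which writes plain-text snapshots suitable for redirecting to a file) to capture three refreshes of PID 3166, five seconds apart, and then exit:
$ top -b -d 5 -n 3 -p 3166 > top_samples.txt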

Sunday, November 3, 2013

Data Guard Troubleshooting Queries and Parameters

Standby Database:
select NAME, DATABASE_ROLE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, CURRENT_SCN, FLASHBACK_ON, FORCE_LOGGING from v$database;

select inst_id,process, status, client_process, thread#, sequence#, block#, blocks  from gv$managed_standby
 where process = 'MRP0';

Starting MRP0:
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION PARALLEL 67;

For RAC use
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  THROUGH ALL SWITCHOVER DISCONNECT  USING CURRENT LOGFILE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  THROUGH ALL SWITCHOVER DISCONNECT FROM SESSION PARALLEL 132 USING CURRENT LOGFILE;

On Standby
select * from gv$active_instances;
ps -ef|grep -i mrp
select PROCESS,STATUS,THREAD#,SEQUENCE#,BLOCK#,BLOCKS,DELAY_MINS from v$managed_standby;
Defer Log Shipping:
alter system set log_archive_dest_state_2=defer scope=both;
alter system set dg_broker_start=false;

Enable Log Shipping:
alter system set log_archive_dest_state_2 = 'enable';
alter system set dg_broker_start=true;

Starting the STANDBY DATABASE:
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect from session;

Checking for Data Guard Errors:
select to_char(timestamp,'DD/MM/YY HH24:MI:SS') timestamp,severity, message_num, message from v$dataguard_status where severity in ('Error','Fatal') order by timestamp;
select * from v$archive_gap;

Missing Logs on Standby:
select local.thread# , local.sequence# from (select thread# , sequence# from v$archived_log where dest_id=1) local where local.sequence# not in (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#) ;

Starting MRP0:
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Stopping MRP0:
RECOVER MANAGED STANDBY DATABASE CANCEL;

MRP0 STATUS - RAC:
select inst_id, process, status, client_process, thread#, sequence#, block#, blocks from gv$managed_standby where process = 'MRP0';
select severity, error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status;

Registering a Logfile (supply the full path of the archived log; the value below is a placeholder):
alter database register logfile '<full_path_of_archived_log_file>';

How To Check Oracle Physical Standby is in Sync with the Primary or Not? 
On Primary
set pages 1000
set lines 120
column DEST_NAME format a20
column DESTINATION format a35
column ARCHIVER format a10
column TARGET format a15
column status format a10
column error format a15
select DEST_ID,DEST_NAME,DESTINATION,TARGET,STATUS,ERROR from v$archive_dest where DESTINATION is NOT NULL
/

SELECT THREAD# "Thread",SEQUENCE# "Last Sequence generated"  FROM V$ARCHIVED_LOG  WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)  ORDER BY 1
/
select max(sequence#),thread# from gv$log group by thread#;

set numwidth 15
select max(sequence#) current_seq from v$log;
/
On Standby
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"  FROM  (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,  (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL  WHERE  ARCH.THREAD# = APPL.THREAD#  ORDER BY 1;
/
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
/
select PROCESS,STATUS,THREAD#,SEQUENCE#,BLOCK#,BLOCKS,DELAY_MINS from v$managed_standby;
/
select max(sequence#),thread# from gv$archived_log where applied='YES' group by thread#;
/
set numwidth 15
select max(applied_seq#) last_seq from v$archive_dest_status;
/

Check which logs are missing
Run this on the standby...
select local.thread#, local.sequence# from   (select thread#  ,  sequence#   from    v$archived_log   where dest_id=1)  local
where  local.sequence# not in  (select sequence#  from v$archived_log  where dest_id=2 and   thread# = local.thread#)
/
Display info about all log destinations
To be run on the primary
set lines 100
set numwidth 15
column ID format 99
column "SRLs" format 99
column active format 99
col type format a4
select ds.dest_id id, ad.status, ds.database_mode db_mode, ad.archiver type, ds.recovery_mode, ds.protection_mode, ds.standby_logfile_count "SRLs" , ds.standby_logfile_active active, ds.archived_seq# from v$archive_dest_status ds, v$archive_dest ad where ds.dest_id = ad.dest_id and ad.status != 'INACTIVE'  order by ds.dest_id
/
Display log destinations options
To be run on the primary
set numwidth 8 lines 100
column id format 99
select dest_id id , archiver, transmit_mode, affirm , async_blocks async, net_timeout net_time, delay_mins delay, reopen_secs reopen
, register,binding  from v$archive_dest order by dest_id;
/
MRP Speed
set linesize 400
col "Values" for a65
col "Recover_start" for a21
select to_char(start_time,'dd.mm.yyyy hh24:mi:ss') "Recover_start", to_char(item)||' = '||to_char(sofar)||' '||to_char(units)||' '||to_char(timestamp,'dd.mm.yyyy hh24:mi') "Values" from v$recovery_progress where start_time = (select max(start_time) from v$recovery_progress);
/
TIME IT TOOK TO APPLY A LOG
select timestamp, completion_time "ArchTime", sequence#,
       round((blocks*block_size)/(1024*1024),1) "SizeM",
       round((timestamp - lag(timestamp,1,timestamp) over (order by timestamp))*24*60*60,1) "Diff(sec)",
       round((blocks*block_size)/1024/decode(((timestamp - lag(timestamp,1,timestamp) over (order by timestamp))*24*60*60),0,1,(timestamp - lag(timestamp,1,timestamp) over (order by timestamp))*24*60*60),1) "KB/sec",
       round((blocks*block_size)/(1024*1024)/decode(((timestamp - lag(timestamp,1,timestamp) over (order by timestamp))*24*60*60),0,1,(timestamp - lag(timestamp,1,timestamp) over (order by timestamp))*24*60*60),3) "MB/sec",
       round(((lead(timestamp,1,timestamp) over (order by timestamp)) - completion_time)*24*60*60,1) "Lag(sec)"
from v$archived_log a, v$dataguard_status dgs
where a.name = replace(dgs.message,'Media Recovery Log ','') and dgs.facility = 'Log Apply Services'
order by timestamp desc;
/

Problem: The recovery service was stopped for a while and a gap built up between the primary and the standby. After the recovery process was started again, the standby is unable to catch up with the primary because of low log apply performance. Disk I/O and memory utilization on the standby server are nearly 100%.

Solution:
1 – Rebooting the standby server reduced memory utilization a little.
2 – ALTER DATABASE RECOVER MANAGED STANDBY DATABASE PARALLEL 8 DISCONNECT FROM SESSION;
In general, using the parallel recovery option is most effective at reducing recovery time when several datafiles on several different disks are being recovered concurrently. The performance improvement from the parallel recovery option is also dependent upon whether the operating system supports asynchronous I/O. If asynchronous I/O is not supported, the parallel recovery option can dramatically reduce recovery time. If asynchronous I/O is supported, the recovery time may be only slightly reduced by using parallel recovery.
3 – alter system set PARALLEL_EXECUTION_MESSAGE_SIZE = 4096 scope=spfile;
When using parallel media recovery or parallel standby recovery, increasing the PARALLEL_EXECUTION_MESSAGE_SIZE database parameter to 4K (4096) can improve parallel recovery by as much as 20 percent. Set this parameter on both the primary and standby databases in preparation for switchover operations. Increasing this parameter requires more memory from the shared pool by each parallel execution slave process.
4 – Kernel parameters were changed in order to reduce the file system cache size (HP-UX dynamic buffer cache parameters):
dbc_max_pct 10 10 Immed
dbc_min_pct 3 3 Immed



Sunday, October 13, 2013

remap_table parameter in datapump

remap_table Parameter of Data Pump in Oracle 11g


Oracle 11g Data Pump provides a new feature, the REMAP_TABLE parameter, to remap table data to a new table name on the target database. We can use REMAP_TABLE to rename entire tables.
Syntax :
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

In 10g Data Pump, we use the REMAP_SCHEMA parameter to remap the schema name during the import, or we use the FROMUSER and TOUSER parameters in original Import. There is no parameter to remap table names. This means that Data Pump Import can only import data into a table with the same name as the original table.

If we have to import table data into a database that already contains a table of the same name and structure, we have to do it in one of two ways.

I.) Rename the original source table temporarily.

II.) If the original source table cannot be renamed, then follow the steps below (a sketch follows this list):
a.) Import the dump into another schema.
b.) Rename the table.
c.) Export the table again.
d.) Finally, import the table under the new name.
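A minimal sketch of workaround II, assuming a spare schema named SCRATCH exists and the original dump was taken from HR.TEST (all file and object names here are illustrative):
impdp system/****@noida dumpfile=hr_test.dmp remap_schema=hr:scratch
SQL> alter table scratch.test rename to newtest;
expdp system/****@noida dumpfile=scratch_newtest.dmp tables=scratch.newtest
impdp system/****@noida dumpfile=scratch_newtest.dmp remap_schema=scratch:hr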

REMAP_TABLE allows us to rename tables during an import operation. Here is a demo of remap_table:

Here, we will create a table, take an export of it, and import it into the same schema. In this scenario we have a table named "test" and we will rename it as "newtest".

1.) Create a table  "test"

SQL> conn hr/hr@noida
Connected.
SQL> create table test(id number);
Table created.
SQL> insert into test values (1);
1 row created.
SQL> insert into test values (2);
1 row created.
SQL> insert into test values (3);
1 row created.
SQL> insert into test values (4);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
         1
         2
         3
         4

2.) Export the table "test"

SQL> host expdp hr/hr@noida    dumpfile=hr_test.dmp    logfile=hrtestlog.log     tables=test

Export: Release 11.2.0.1.0 - Production on Fri May 27 11:20:43 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HR"."SYS_EXPORT_TABLE_01":  hr/********@noida dumpfile=hr_test.dmp logfile=hrtestlog.log tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "HR"."TEST"                                 5.031 KB       4 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
  D:\APP\NEERAJS\ADMIN\NOIDA\DPDUMP\HR_TEST.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 11:21:16

Since we have the dump of table "test", we import it into the HR schema with the new name "newtest".

3.) Import the dump with remap_table Parameter

SQL>host impdp hr/hr@noida  dumpfile=hr_test.dmp logfile=imphrtestlog.log remap_table=hr.test:newtest
Import: Release 11.2.0.1.0 - Production on Fri May 27 11:22:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "HR"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "HR"."SYS_IMPORT_FULL_04":  hr/********@noida dumpfile=hr_test.dmp logfile=imphrtestlog.log remap_table=hr.test:newtest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."NEWTEST"                              5.031 KB       4 rows
Job "HR"."SYS_IMPORT_FULL_04" successfully completed at 11:22:25

Since the job completed successfully, we check the imported table, i.e. "newtest".

SQL> select * from tab;
TNAME                                     TABTYPE                         CLUSTERID
----------------------                       ------------                         ----------------
COUNTRIES                               TABLE
DEPARTMENTS                         TABLE
EMPLOYEES                              TABLE
EMP_DETAILS_VIEW                VIEW
JOBS                                          TABLE
JOB_HISTORY                           TABLE
LOCATIONS                               TABLE
NEWTEST                                  TABLE
REGIONS                                    TABLE
SYS_IMPORT_FULL_01             TABLE
SYS_IMPORT_FULL_02             TABLE
SYS_IMPORT_FULL_03             TABLE
TEST                                            TABLE

13 rows selected.

SQL> select * from newtest;
        ID
----------
         1
         2
         3
         4

Note: Only objects created by the import will be remapped. In particular, pre-existing tables will not be remapped if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.

Data Pump

Questions on Oracle Data Pump

Here are some questions related to Data Pump which will help you clear your doubts regarding Data Pump.

1.) What is Oracle Data Pump?
Oracle Data Pump is a new feature of Oracle Database 10g that provides high-speed, parallel, bulk data and metadata movement of Oracle database contents. A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement. In Oracle Database 10g, new Export (expdp) and Import (impdp) clients that use this interface have been provided. Oracle recommends that customers use these new Data Pump Export and Import clients rather than the original Export and Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.

2.)  Is Data Pump a feature or an option of Oracle 10g?
Data Pump is a fully integrated feature of Oracle Database 10g. Data Pump is installed automatically during database creation and database upgrade.

3.) What platforms is Data Pump provided on?
Data Pump is available in Oracle Database 10g Standard Edition, Enterprise Edition, and Personal Edition. However, the parallel capability is only available in Oracle Database 10g Enterprise Edition. Data Pump is included on all the same platforms supported by Oracle 10g, including Unix, Linux, Windows NT, Windows 2000, and Windows XP.

4.) What are the system requirements for Data Pump?
The Data Pump system requirements are the same as the standard Oracle Database 10g requirements. Data Pump doesn't need a lot of additional system or database resources, but the time to extract and treat the information will depend on the CPU and memory available on each machine. If system resource consumption becomes an issue while a Data Pump job is executing, the job can be dynamically throttled to reduce the number of execution threads.

5.) What is the performance gain of Data Pump Export versus Original Export?
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster than original Export because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.

6.) What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than original Import. The reason it is so much faster is that conventional Import uses only conventional-mode inserts, whereas Data Pump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.

7.) Does Data Pump require special tuning to attain performance gains?
No, Data Pump requires no special tuning. It runs optimally “out of the box”. Original Export and (especially) Import require careful tuning to achieve optimum results.

8.) Why are directory objects needed?
They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.
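For reference, a directory object only exposes one server-side path, and access to it must be granted explicitly. A small sketch (the path and names are illustrative):
SQL> create directory dp_dir as '/u01/app/oracle/dpdump';
SQL> grant read, write on directory dp_dir to hr;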

9.) What makes Data Pump faster than original Export and Import?
There are three main reasons that Data Pump is faster than original Export and Import. First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.

10.) How much faster is Data Pump than the original Export and Import utilities?
For a single stream, Data Pump Export is approximately 2 times faster than original Export and Data Pump Import is approximately 15 to 40 times faster than original Import. Speed can be dramatically improved using the PARALLEL parameter.

11.) Why is Data Pump slower on small jobs?
Data Pump was designed for big jobs with lots of data. Each Data Pump job has a master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.

12.) Are original Export and Import going away?
Original Export is being deprecated with the Oracle Database 11g release. Original Import will always be supported so that dump files from earlier releases (release 5.0 and later) will be able to be imported. Original and Data Pump dump file formats are not compatible.

13.) Are Data Pump dump files and original Export and Import dump files compatible?
No, the dump files are not compatible or interchangeable. If you have original Export dump  files, you must use original Import to load them.

14.) How can I monitor my Data Pump jobs to see what is going on?
In interactive mode, you can get a lot of detail through the STATUS command. In SQL, you can query the following views (example queries are sketched after this list):
  • DBA_DATAPUMP_JOBS – all active Data Pump jobs and the state of each job
  • USER_DATAPUMP_JOBS – summary of the current user's active Data Pump jobs
  • DBA_DATAPUMP_SESSIONS – all active user sessions that are attached to a Data Pump job
  • V$SESSION_LONGOPS – shows progress on each active Data Pump job
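The queries below are illustrative (run as a DBA; the OPNAME filter matches the default job names):
SQL> select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs;
SQL> select sid, serial#, opname, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
     from v$session_longops where totalwork > 0 and opname like 'SYS_EXPORT%';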

15.) Can you adjust the level of parallelism dynamically for more or less resource consumption?
Yes, you can dynamically throttle the number of threads of execution throughout the lifetime of the job. There is an interactive command mode where you can adjust the level of parallelism. So, for example, you can start a job during the day with PARALLEL=2 and then increase it at night to a higher level.
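A hedged example of raising the degree of an already running export interactively (the job name is illustrative):
$ expdp hr/hr@noida attach=SYS_EXPORT_TABLE_01
Export> parallel=4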

16.) Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump Export through gzip. Starting in Oracle Database 11g, the COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
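For example, a sketch of an 11g export compressed as it is written (file names are illustrative):
$ expdp hr/hr@noida dumpfile=hr_comp.dmp logfile=hr_comp.log tables=test compression=all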

17.) Does Data Pump support all data types?
Yes, all the Oracle database data types are supported via Data Pump’s two data movement mechanisms, Direct Path and External Tables.

18.) What kind of object selection capability is available with Data Pump?
With Data Pump, there is much more flexibility in selecting objects for unload and load operations. You can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform. Almost all database object types can be excluded or included in an operation using the EXCLUDE and INCLUDE parameters.
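As an illustration, a sketch that exports the HR schema while skipping indexes and statistics (any valid object paths could be used with EXCLUDE or INCLUDE; file names are illustrative):
$ expdp hr/hr@noida dumpfile=hr_noidx.dmp logfile=hr_noidx.log schemas=hr exclude=index,statistics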

19.) Is it necessary to use the Command line interface or is there a GUI that you can use?
You can either use the Command line interface or the Oracle Enterprise Manager web-based GUI interface.

20.) Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and operating systems.

21.) Can I take 1 dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases. You can also just import different subsets of the data out of that single dump file set.

22.) Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY=YES parameter to see how much disk space is required for the job's dump file set before you start the operation.
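For example (no dump file is written when ESTIMATE_ONLY=YES is used; the log file name is illustrative):
$ expdp hr/hr@noida schemas=hr estimate_only=yes logfile=hr_estimate.log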

23.) Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location while the job is running. Clients may also detach from an executing job without affecting it.

24.) If a job is stopped either voluntarily or involuntarily, can I restart it?
Yes, every Data Pump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time, without losing any data.
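A sketch of restarting a stopped job by attaching to it (the job name is illustrative; it can be looked up in DBA_DATAPUMP_JOBS):
$ impdp hr/hr@noida attach=SYS_IMPORT_FULL_04
Import> start_job
Import> continue_client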

25.) Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump it is now possible to change the definition of some objects as they are created at import time. For example, you can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if you are moving across platforms with different file system syntax.
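For example, a hedged sketch of remapping a datafile path during a full import; because of the embedded quotes, REMAP_DATAFILE is usually easiest to supply in a parameter file (all paths and names here are illustrative):
$ cat remap.par
full=y
dumpfile=full.dmp
remap_datafile="'/u01/oradata/users01.dbf':'/disk2/oradata/users01.dbf'"
$ impdp system/****@noida parfile=remap.par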

26.) Is Network Mode supported on Data Pump?
Yes, Data Pump Export and Import both support a network mode in which the job's source is a remote Oracle instance. This is an overlap of unloading the data, using Export, and loading the data, using Import, so those processes don't have to be serialized. A database link is used for the network. You don't have to worry about allocating file space because there are no intermediate dump files.
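For example, a sketch that pulls the HR schema straight from a remote database over an existing database link, with no intermediate dump file (the link name is illustrative):
$ impdp hr/hr@noida network_link=prod_db_link schemas=hr logfile=hr_netimp.log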

27.) Does Data Pump support Flashback?
Yes, Data Pump supports the Flashback infrastructure, so you can perform an export and get a dump file set that is consistent with a specified point in time or SCN.
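For example, a sketch of an export consistent as of a specific SCN (the SCN value is illustrative; FLASHBACK_TIME can be used instead to name a point in time):
$ expdp hr/hr@noida dumpfile=hr_scn.dmp logfile=hr_scn.log tables=test flashback_scn=2134567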

28.) Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i-compatible Export that operates against Oracle Database 10g will ship with Oracle 10g, but it does not export Oracle Database 10g features. Also, Data Pump Export has new syntax and a new client executable, so original Export scripts will need to change. Oracle recommends that customers convert to use Oracle Data Pump Export.

29.) How do I import an old dump file into Oracle 10g? Can I use original Import or do I have to convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can only read Oracle Database 10g (and later) Data Pump Export dump files. Data Pump Import has new syntax and a new client executable, so original Import scripts will need to change.

30.) When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Data Pump Export and Import may be used less frequently, but for very important tasks, such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and application deployment throughout a corporation.

31.)When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Data Pump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.

Conclusion
Data Pump is fast and flexible. It replaces original Export and Import starting in Oracle Database 10g. Moving to Data Pump is easy, and it opens up a world of new options and features.