2. Primary volume/source data protection
3. Different storage medium
4. Different hardware platforms
5. Application downtime
6. Data corruption, missing data, or data loss
7. Technical compatibility issues
# Database size?
# Current platform vs. target platform?
# Same endian format or different?
# How much downtime is allowed?
# List of data types in use?
– Consider the current platform as Linux and the target platform as AIX.
– AIX is big endian, whereas Linux is little endian.
You can query the endian format of every supported platform from the V$TRANSPORTABLE_PLATFORM view, as shown below:
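A query along these lines produces the listing that follows (exact column widths vary by version and SQL*Plus settings):

```
SELECT platform_id, platform_name, endian_format
  FROM v$transportable_platform;
```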
PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little
– Assume a 24-hour window over the weekend.
Option One – Migrate to a Different-Endian Platform Using Transportable Tablespaces with RMAN
The following high-level steps describe how to migrate a database to a new platform using transportable tablespaces:
2.- Import objects required for transport operations from the source database into the destination database.
3.- Export transportable metadata for all user tablespaces from the source database.
4.- Transfer data files for user tablespaces to the destination system.
5.- Use RMAN to convert the data files to the endian format of the destination system.
6.- Import transportable metadata for all user tablespaces into the destination database.
7.- Import the remaining database objects and metadata (those not moved by the transport operation) from the source database into the destination database.
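Steps 3, 5, and 6 above can be sketched with Data Pump and RMAN as follows. The tablespace name (USERS), directory object, file paths, and platform name are placeholders for illustration, not values taken from this article:

```
-- Step 3 (source): export transportable metadata.
-- The tablespaces must be made READ ONLY before the export.
$ expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_meta.dmp \
    TRANSPORT_TABLESPACES=USERS

-- Step 5 (destination): convert the copied data files to the
-- destination endian format with RMAN. The FROM PLATFORM string
-- must match the source platform name in V$TRANSPORTABLE_PLATFORM.
RMAN> CONVERT DATAFILE '/stage/users01.dbf'
      FROM PLATFORM 'Linux IA (64-bit)'
      FORMAT '/u01/oradata/AIXDB/users01.dbf';

-- Step 6 (destination): plug the tablespace in by importing its metadata.
$ impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_meta.dmp \
    TRANSPORT_DATAFILES='/u01/oradata/AIXDB/users01.dbf'
```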
Prior to Oracle 10g, logical backups were taken with the EXP/IMP utilities, which were the basic export/import tools through Oracle 9i. In Oracle 10g, Oracle introduced a more powerful logical backup tool: the Data Pump export/import utility.
Data Pump takes the old export and import utilities one step further: you have full control over a running job (stop it, pause it, check its status, restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older export/import utilities.
Advantages of using Data Pump
1. Ability to estimate job run times
2. Ability to restart failed jobs
3. Fine-grained object selection
4. Monitoring of running jobs
5. Direct loading of a database from a remote instance over the network
6. Remapping capabilities
7. Improved performance through parallel execution
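The job-control capabilities described above (stop, pause, restart, monitor) are exposed through Data Pump's interactive command mode. A minimal sketch, where the job name, credentials, and directory object are assumptions:

```
-- Start an export with an explicit job name so it is easy to reattach to.
$ expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp FULL=y JOB_NAME=full_exp

-- Detach from the client (CTRL+C), then reattach to the running job later:
$ expdp system ATTACH=full_exp

-- At the interactive Export> prompt:
Export> STATUS        -- check progress
Export> STOP_JOB      -- pause the job; it can be restarted later
Export> START_JOB     -- resume a stopped job after reattaching
```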
Yes, you can. For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
You can set the degree of parallelism to two times the number of CPUs, then tune from there.
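As a sketch, the %U substitution variable generates enough dump files for the requested degree of parallelism; the directory object and file name here are placeholders:

```
-- Export with four parallel workers; %U expands to exp01.dmp, exp02.dmp, ...
$ expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp%U.dmp PARALLEL=4

-- Import the same dump file set with matching parallelism.
$ impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp%U.dmp PARALLEL=4
```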
Everything must be done in order: for Import, no data loading can occur until the tables are created, because data cannot be loaded into tables that do not yet exist.
2. Once the tables are created, the first worker switches from loading metadata to loading data, and the remaining workers begin loading data as well.
3. Once the table data is loaded, the first worker returns to loading metadata again.
The rest of the workers are idle until the first worker loads all the metadata up to package bodies.
4. Multiple workers load package bodies in parallel.
5. One worker loads metadata up to and including secondary tables.
6. Multiple workers load secondary table data.
7. One worker loads the remaining metadata.