Cross-Platform Oracle Database Migration

Cross-platform Oracle Database migration is one of those activities that every DBA has to plan precisely, leaving no room for regret.
Challenges to consider when migrating data:
1. Performance
2. Primary volume/source data protection
3. Different storage medium
4. Different hardware platforms
5. Application downtime
6. Data corruption, missing data or data loss
7. Technical compatibility issues.
Let's start with some basic details to pen down first:
# Database size?
# Current platform vs. target platform
# Same endian format or different?
# How much downtime can be given?
# List of data types?
– Let's consider a scenario with a database size of 1 TB.
– Consider the current platform as Linux and the target platform as AIX.
– AIX is big endian, whereas Linux is little endian.
You can query the endian format of each platform as shown below:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
  PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
  ----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little
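To confirm the platform of the current (source) database itself, you can also join V$DATABASE with V$TRANSPORTABLE_PLATFORM (V$DATABASE exposes PLATFORM_ID and PLATFORM_NAME):

SQL> SELECT d.platform_id, d.platform_name, tp.endian_format
       FROM v$database d
       JOIN v$transportable_platform tp
         ON tp.platform_id = d.platform_id;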
– Assume a downtime window of 24 hours over the weekend.

Option One – Migrate to a Different Endian Platform Using Transportable Tablespaces with RMAN

The following high-level steps describe how to migrate a database to a new platform using transportable tablespaces (a sketch of the key commands follows the list):

1. Create a new, empty database on the destination platform.
2. Import objects required for transport operations from the source database into the destination database.
3. Export transportable metadata for all user tablespaces from the source database.
4. Transfer the data files for the user tablespaces to the destination system.
5. Use RMAN to convert the data files to the endian format of the destination system.
6. Import the transportable metadata for all user tablespaces into the destination database.
7. Import the remaining database objects and metadata (those not moved by the transport operation) from the source database into the destination database.
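As a rough sketch of the key commands for steps 3 to 6 (the tablespace name USERS_DATA, file paths, directory objects, and credentials are placeholders for illustration only):

-- Source (Linux): verify the tablespace set is self-contained
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS_DATA', TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

-- Source: make the tablespace read-only and export its transportable metadata
SQL> ALTER TABLESPACE users_data READ ONLY;
$ expdp system/<password> DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_meta.dmp \
    TRANSPORT_TABLESPACES=users_data LOGFILE=tts_exp.log

-- Destination (AIX): convert the copied data file from little endian to big endian
-- (FROM PLATFORM must match the source's PLATFORM_NAME exactly)
RMAN> CONVERT DATAFILE '/stage/users_data01.dbf'
        FROM PLATFORM 'Linux IA (64-bit)'
        FORMAT '/u01/oradata/TGTDB/users_data01.dbf';

-- Destination: plug the tablespace in using the converted data file
$ impdp system/<password> DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_meta.dmp \
    TRANSPORT_DATAFILES='/u01/oradata/TGTDB/users_data01.dbf' LOGFILE=tts_imp.log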
Option Two – Cross-Platform Database Migration Using the Export and Import Utilities
Data Pump Export and Import
Prior to Oracle 10g, logical backups were taken with the EXP/IMP utilities, which were the basic export/import tools up to Oracle 9i. With Oracle 10g, Oracle introduced a more powerful logical backup tool: the Data Pump export/import utility (expdp/impdp).
Data Pump takes the old export and import utilities one step further: you have full control over the running job (stop it, pause it, check it, restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older export/import utilities.
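For example, a minimal full-database Data Pump run for this scenario might look like the following (the directory object, file names, and credentials are illustrative placeholders):

-- Source (Linux): full logical export
$ expdp system/<password> FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_exp.log

-- Destination (AIX): import into a pre-created, empty database after copying the dump file across
$ impdp system/<password> FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_imp.log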
Advantages of using Data Pump:
1. Ability to estimate job times
2. Ability to restart failed jobs
3. Fine-grained object selection
4. Ability to monitor running jobs
5. Ability to load a database directly from a remote instance via the network
6. Remapping capabilities (illustrated below)
7. Improved performance using parallel execution
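To illustrate points 5 and 6, an import can pull data straight from the source database over a database link and remap schemas and tablespaces on the fly; the database link, schema, and tablespace names below are assumptions for illustration:

$ impdp system/<password> NETWORK_LINK=src_link SCHEMAS=app_owner \
    REMAP_SCHEMA=app_owner:app_owner_new \
    REMAP_TABLESPACE=users_data:users_data_new LOGFILE=net_imp.log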
Since we performed the expdp with PARALLEL=2, can we use impdp with PARALLEL=8 (because the target system has more CPUs)?
Yes, you can. For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
You can set the degree of parallelism to two times the number of CPUs, then tune from there.
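For instance (schema, file names, and credentials are placeholders), the %U wildcard lets the export write multiple dump files, which the import can then read in parallel:

-- Source: export with two parallel workers
$ expdp system/<password> SCHEMAS=app_owner DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=app_owner_%U.dmp PARALLEL=2 LOGFILE=exp_p2.log

-- Destination: import the same dump file set with a higher degree of parallelism
$ impdp system/<password> DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=app_owner_%U.dmp PARALLEL=8 LOGFILE=imp_p8.log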
The PARALLEL parameter works a bit differently in Import than in Export. Because various dependencies exist when creating objects during import, everything must be done in order: no data loading can occur until the tables are created, because data cannot be loaded into tables that do not yet exist.
Data Pump Import processes the database objects in the following order:
1. The first worker begins to load all the metadata: the tablespaces, schemas, etc., until all the tables are created.
2. Once the tables are created, the first worker starts loading data instead of metadata, and the rest of the workers start loading data too.
3. Once the table data is loaded, the first worker returns to loading metadata again. The rest of the workers remain idle until the first worker has loaded all the metadata up to package bodies.
4. Multiple workers load package bodies in parallel.
5. One worker loads metadata up to and including secondary tables.
6. Multiple workers load secondary table data.
7. One worker loads the remaining metadata.
Note: One worker creates all the indexes but uses PX (parallel execution) processes up to the PARALLEL value, so indexes get created faster. Thus, an import job can be started with PARALLEL=8 and the user will only see one worker being utilized at certain points during job execution. No other workers or parallel execution processes will be working until all the tables are created. When the tables are created, a burst of workers and possibly PX processes will execute in parallel until the data is loaded; then the worker processes will become idle.
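You can observe this behaviour while the job runs, for example (the job name below is the typical default and is only an assumption):

-- List running Data Pump jobs and their degree of parallelism
SQL> SELECT owner_name, job_name, operation, state, degree
       FROM dba_datapump_jobs;

-- Attach to a running job to check the status of each worker
$ impdp system/<password> ATTACH=SYS_IMPORT_FULL_01
Import> STATUS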
In conclusion, both options can be explored. Export and import is undoubtedly an old, proven method, but it may still take significant time compared to transportable tablespaces with RMAN. Both approaches have certain advantages and disadvantages, and the choice should be made based on the various factors mentioned earlier.
