Connect as SYS using SQL*Plus in an administrator command prompt.
Check whether your database is in archive log mode or not by running the command ARCHIVE LOG LIST. It will show you details like the following:
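A minimal sketch of these two steps, assuming OS authentication on a Windows host (the sequence numbers in the output are illustrative):

C:\> sqlplus / as sysdba

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     2650
Next log sequence to archive   2652
Current log sequence           2652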
It is always preferable to have your database in archive log mode, because then you can recover all committed changes in the event of an OS or disk failure.

This query displays the path of the redo log files, their group and their status. It is always recommended to have a minimum of two members per group.
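The original screenshot is not reproduced here; a query along these lines, joining the standard V$LOG and V$LOGFILE views, returns the same information:

SQL> SELECT l.group#, f.member, l.status
  2  FROM v$log l JOIN v$logfile f ON l.group# = f.group#;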


As you can see, I have only one member in group 6, whose current status is INACTIVE. I intentionally kept a single member to create the scenario for this practical.


Now go to the path where the redo log member of group 6 is located and open the file in Notepad. It will show some unreadable binary data. Delete a couple of lines and save it to its original location, which is the one we got as a result of the query.


Why did we do the above step, i.e. why did we delete some lines of the inactive redo log file?
The reason is that we cannot tamper with the redo log file of a member whose status is CURRENT, since the database keeps it open. Also, I want to explain the danger of having only one member in a group, as well as the risks of the command we will be using to get our database started again.


Another question: since the status of the member whose redo log file we just corrupted is INACTIVE, how are we going to check or confirm whether the redo log file is actually corrupt, and what impact will it have on our database?


We certainly cannot wait till that member becomes CURRENT to get the error, if any. So this is what we are going to do: we will force the database to switch the log file using the two commands below.
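The original screenshot is gone; from the surrounding text, the "two commands" were presumably the log switch plus the status check, roughly:

SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> SELECT group#, status FROM v$log;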

Keep repeating the above two commands till the member of group 6 becomes CURRENT and the other groups become INACTIVE.


When it becomes CURRENT, run SHUTDOWN IMMEDIATE and then STARTUP. You should get the following error(s):
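A sketch of this step (the on-screen error was shown in a screenshot and varies with the damage; a corrupted online log typically surfaces as an ORA-00333 redo log read error):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
-- the instance starts and mounts, but the open step fails with an error;
-- the underlying cause is recorded in the alert log (more on that below)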

But this is one of the most common errors you will get when learning databases. So how are we going to know that our redo log file is corrupted, and that this is the actual reason why we cannot start up our database?


I know, I know, you are all freaked out since the database isn't starting... In such a case you should recite the vintage and famous magical words:
ABRA KA DABRA!!!

Keep reciting those three words till you realize that there is a beautiful text document located at the following place:
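The screenshot of the path is not reproduced here. On 11g and later the alert log typically lives under the ADR trace directory (for example %ORACLE_BASE%\diag\rdbms\<dbname>\<SID>\trace on Windows); once the instance is at least started, you can ask the database itself via the standard V$DIAG_INFO view:

SQL> SELECT value FROM v$diag_info WHERE name = 'Diag Trace';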

It is the alert_admin text document (or, in general, alert_SID, where SID stands for System Identifier; for example, if your database name is orcl then the file will be alert_orcl), which records or logs everything that's happening within your database.


Open that document in Notepad (on Windows) or vi (on Linux), then scroll to the end (or press CTRL+END) and look for the following errors:
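Purely illustrative (the exact messages and block numbers depend on how the file was damaged), the tell-tale entries name the corrupt member of group 6, along the lines of:

ORA-00333: redo log read error block 2048 count 8192
ORA-00312: online log 6 thread 1: '<path of the group 6 member>'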

And thus we have successfully corrupted our redo log file, and the impact is that we are not able to start up our database! YIPEEE!! Wait a minute, I don't think we should be happy about it.................


Let’s take a look at the following steps to resolve these errors:

Since we cannot start up the database in the current session, open another session and log in as SYS. We will be connected to an idle instance. Then start the database in mount mode using:
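SQL> CONNECT / AS SYSDBA
Connected to an idle instance.
SQL> STARTUP MOUNT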
Then run one of the most dangerous DBA commands you will ever have to run, which is:
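The screenshot is gone; judging from the discussion below (and Step 6 further down), the command shown here, deliberately incomplete, was the CLEAR UNARCHIVED LOGFILE command without its GROUP clause:

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE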
BUT WAIT!!!! Before running the above command, let's first understand WHY this particular command?
(One of the reasons why the above command is incomplete....MUAHAHAHAHA.!!!)

Isn't it easier if we just drop that corrupted redo log file? Or why not run this command:
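That is, the plain CLEAR LOGFILE form (group number taken from this practical):

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 6;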
The reason is: if the status of your redo log group is CURRENT, then no archive log file has been created yet for that particular redo log file, and hence we cannot run ALTER DATABASE CLEAR LOGFILE, which refuses to clear a log that has not been archived.
To check whether your redo log file is archived or not you can run the following queries:
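The screenshots are gone; both pieces of information live in the standard V$LOG view (the ARCHIVED column shows YES/NO), so the two queries were presumably along these lines:

SQL> SELECT group#, status FROM v$log;
SQL> SELECT group#, archived, status FROM v$log;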
You can see that the status of redo log group 2 is CURRENT (in the 1st query) and a big NO under ARCHIVED in front of group 2 (in the 2nd query, which also displays the status, CURRENT).
Also, the database won't allow you to drop a redo log file whose status is CURRENT; if you try, it will give you an error. Take a look at the following example:
Here the status of the members of group 2 is CURRENT. Now run the following command:
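A sketch of the attempt (the error Oracle raises for this situation is ORA-01623):

SQL> ALTER DATABASE DROP LOGFILE GROUP 2;
-- fails with ORA-01623: log 2 is current log for instance ... - cannot drop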
But then you will say, "AHAHAHA, I can change the status of the group from CURRENT to INACTIVE by forcing a log switch and then drop the file".

Well BOOO!!! It's not going to help, since you will have to drop the entire group. The reason is that you cannot drop a redo log file if it's the single, lonely member in its group. Take a look at the following example:
As you can see, I have only one member in group 6, whose status is CURRENT. I have already explained and shown why we cannot drop a redo log file whose status is CURRENT.
Now I will force a log switch using:
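SQL> ALTER SYSTEM SWITCH LOGFILE;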

As you can see, I have forced the log switch and changed the status from CURRENT to INACTIVE. Now look carefully at what happens when I try to drop the redo log member of group 6.
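A sketch of the attempt (the member path is whatever your V$LOGFILE query returned; the error Oracle raises here is ORA-00361):

SQL> ALTER DATABASE DROP LOGFILE MEMBER '<path of the group 6 member>';
-- fails with ORA-00361: cannot remove last log member ... for group 6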
It throws an error saying "cannot remove last log member", meaning you cannot drop a redo log member if it is the only member remaining in the group.

Then comes the next question: can I drop the entire group? Or: what happens if I drop the entire group?

The answer is that it will work, but it's not recommended. The redo log file of that group is corrupted, so you are losing that data anyway; and if you want the group back, you will have to re-create it and add a redo log member, thus increasing your own effort.


So I guess it's time to go back to the most dangerous DBA command and see what happens after we execute it. Before you execute it, check your archive log list (you will know the reason later on). I am now going to repeat the steps I mentioned above about corrupting the redo log file, and then execute the command. (Geez!! Be patient.)

Step 1: Check the status of your redo log files.
Step 2: Corrupt the redo log member of the inactive group; for this practical, choose the group which has only one member in it.

Step 3: As mentioned earlier, you won't see the corruption error directly on your screen. You might get the following error on your screen:
You will be able to see the actual error in the alert log document (please scroll up for the information about the alert log document).
Depending upon how much you have corrupted your redo log file, or which lines you have deleted, you should get the following errors in the alert log document:
Looking at the error you can make a wild guess that you have deleted something in the redo log file thus causing the reduction in its size.

Step 4: Then perform the following steps:
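A sketch of this step (same as before; the open step is what fails):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
ORACLE instance started.
Database mounted.
-- the open step fails with the corruption-related error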
As you can see, the database starts in MOUNT mode and then shows the error, which means you cannot bring the database to OPEN mode; but you can start it in MOUNT mode, which is exactly what we need to run the required command.

Step 5: Connect as SYS, since you have shut down the instance; you will be connected to an idle instance. Then start the database in MOUNT mode:
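SQL> CONNECT / AS SYSDBA
Connected to an idle instance.
SQL> STARTUP MOUNT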
Your database has started in MOUNT mode.

Step 6: Now run the following DBA command:
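This is the command the text calls "the complete command" (group number from this practical):

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 6;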
This is the complete command. The reason I have given GROUP 6 is that I corrupted the redo log member of this group, and no archive log of this member was created since its status was CURRENT.

Step 7: Check the status of your redo log file by running the following commands:
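SQL> SELECT group#, status FROM v$log;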
As you can see, the status of the group we have cleared, group 6 in this case, is now UNUSED. Now let's run the same query, but this time we will check the sequence of the group too.
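SQL> SELECT group#, sequence#, status FROM v$log;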
And as you can see, the sequence# of the UNUSED redo log group is 0.

Step 8: Now we will run the switch logfile command to change the current status, which is UNUSED.
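SQL> ALTER SYSTEM SWITCH LOGFILE;
-- fails with ORA-01109: database not open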
OOPSSS!!! Looks like our database is not in OPEN mode. So, to change to OPEN mode, run the following command:
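SQL> ALTER DATABASE OPEN;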
Now we will try to switch log file again.
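SQL> ALTER SYSTEM SWITCH LOGFILE;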
Thus we need the database to be in OPEN mode to execute the SWITCH LOGFILE command.
Let's check the status and sequence of the redo log file again by executing the following command:
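SQL> SELECT group#, sequence#, status FROM v$log;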
As you can see, the status of redo log group 6 has now changed from UNUSED to CURRENT, and we also have a new sequence generated, which is 2657.

Step 9: Then run the following command:
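The screenshot is gone; a query over the standard V$ARCHIVED_LOG view produces such a list:

SQL> SELECT sequence#, name FROM v$archived_log ORDER BY sequence#;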
You will get a long list. Just scroll to the bottom till you see a certain sequence whose name is nothing but a blank space; in this case, look for sequence# 2652. The reason for the blank space is that no archive log was created for that particular log file, and hence we had to run the clear unarchived logfile command.

Now think carefully about what will happen if a situation arises where you are required to restore and recover your database, and you don't have a backup taken after executing the clear unarchived logfile command.

Your backup, restore and recovery will work fine till sequence# 2651, but when it reaches sequence# 2652 there will be a big mess. Since that redo log file was never archived, you will lose the data from it, and we have seen earlier that we need the redo log files for recovery in case of media failure. And since we have no archive log of this redo log file, we won't be able to recover the database past sequence# 2651.

Thus, TAKE A BACKUP of the whole database as soon as you have followed the above steps and executed the following command:
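That is (group number from this practical), followed immediately by a full backup, for example with RMAN:

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 6;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;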

If you have followed the above steps perfectly then you should be able to perform this practical without any problems.
This blog will demonstrate detailed steps to restore a database on a new host using RMAN. You can also use it to move a production database to a new host as a test database.
Here, we are going to use a source database with DB_NAME=ORCL, and we are going to restore this ORCL database on the test database machine.

1. Preparing to restore the database on a new host:

If you use the procedure in this section, then the DBID for the restored database equals the DBID for the original database. Do not register a test database created in this way in the same recovery catalog as the source database. Because the DBID of the two databases is the same, the metadata for the test database can interfere with RMAN's ability to restore and recover the source database. 

Record the DBID for your source database. You can find the DBID by connecting to the database with RMAN as shown below:
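RMAN prints the DBID in its connection banner (the DBID value below is made up):

C:\> rman target /
connected to target database: ORCL (DBID=1334848957)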

Take backups of all the datafiles, the controlfile, the spfile and the archive log files on the source database machine.
Perform the following steps on the source database machine:
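A minimal RMAN backup sketch covering everything the restore will need (no tags or formats specified, so the pieces go to the default backup location):

RMAN> BACKUP DATABASE;
RMAN> BACKUP CURRENT CONTROLFILE;
RMAN> BACKUP SPFILE;
RMAN> BACKUP ARCHIVELOG ALL;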

Ensure that the initialization parameter file and the backups of the datafiles, controlfiles and archivelogs are available on the new host machine for the restore operation. Copy all the backup files from the old host to the new host using an operating system utility, or transfer them via pen drive. If you are using an operating system utility, you can perform the following step to copy the files from the shared folder 'backup' on Neha-pc:
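For example, from a Windows command prompt (the destination folder D:\backup is an assumption):

C:\> copy \\Neha-pc\backup\*.* D:\backup\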

When we restore the database, we will not have the active redo logs from the source database, so we need the last SCN covered by the archive log backups for the restore operation. We can find it by querying V$BACKUP_ARCHIVELOG_DETAILS.
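For example (the NEXT_CHANGE# column name mirrors V$ARCHIVED_LOG; verify it against your version's documentation):

SQL> SELECT MAX(next_change#) FROM v$backup_archivelog_details;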

2. Creating the instance on the test machine and creating the pfile by restoring it from the spfile:

Perform the following steps on the test database machine:
Use the oradim utility to create the instance for the orcl database on the test machine.
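C:\> oradim -NEW -SID orcl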

When we start the database in NOMOUNT mode we receive an error because the parameter file is not found. This is expected, because we are restoring to a new host; Oracle will therefore use a "dummy" parameter file to start the instance.
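A sketch of what this looks like from RMAN (the DBID value is the one recorded earlier; the number here is made up):

RMAN> SET DBID 1334848957;
RMAN> STARTUP NOMOUNT;
-- RMAN reports the missing parameter file, then starts the instance
-- with a dummy one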

Restore and edit the server parameter file, which is in SP_FILE.BKP. Since the SPFILE is a binary file, we need to create a text PFILE from it and then make the initialization parameter changes.
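A sketch of the restore (the backup piece name comes from the text; the PFILE path is an assumption, the default location on Windows):

RMAN> RESTORE SPFILE TO PFILE
2>      'C:\app\oracle\product\11.2.0\dbhome_1\database\initorcl.ora'
2>      FROM 'D:\backup\SP_FILE.BKP';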

Now that we have the PFILE, we need to edit some of the location-specific parameters in initorcl.ora to reflect the new host's directory structure. The following are the parameters that typically need to be changed:
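Typical candidates, with example values (your exact list and paths will differ):

*.audit_file_dest='C:\app\oracle\admin\orcl\adump'
*.control_files='D:\oradata\orcl\control01.ctl','D:\oradata\orcl\control02.ctl'
*.db_recovery_file_dest='D:\fra'
*.diagnostic_dest='C:\app\oracle'
*.log_archive_dest_1='LOCATION=D:\arch'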

Make sure the directories are already present.

3. Restoring the database on the New Host:

Restart the instance with the edited initialization parameter file. Then restore the control file to whatever locations you specified in the CONTROL_FILES initialization parameter, and mount the database. After that, catalog the datafiles and archivelogs from FULL_DATAFILES_350.bkp, ARCHIVE_LOG_356.bkp and ARCHIVE_LOG_357.bkp.
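A sketch of this sequence (the datafile and archivelog piece names come from the text; the controlfile piece name and the paths are assumptions):

RMAN> STARTUP FORCE NOMOUNT
2>      PFILE='C:\app\oracle\product\11.2.0\dbhome_1\database\initorcl.ora';
RMAN> RESTORE CONTROLFILE FROM 'D:\backup\CONTROL_FILE.BKP';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG BACKUPPIECE 'D:\backup\FULL_DATAFILES_350.bkp';
RMAN> CATALOG BACKUPPIECE 'D:\backup\ARCHIVE_LOG_356.bkp';
RMAN> CATALOG BACKUPPIECE 'D:\backup\ARCHIVE_LOG_357.bkp';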

Execute the following RMAN restore and recovery script. The script does the following things:
  • SET NEWNAME : For each data file on the destination host that is restored to a different path than it had on the source host, use this command to specify the new path on the test machine. If the file systems on the destination system are set up to have the same paths as the source host, then do not use SET NEWNAME for those files restored to the same path as on the source host.
  • ALTER DATABASE RENAME FILE : For each online redo log that is to be created at a different location than it had on the source host, use this SQL command to specify the path name on the destination host. If the file systems on the destination system are set up to have the same paths as the source host, then do not use ALTER DATABASE RENAME FILE for those files restored to the same path as on the source host.
  • SET UNTIL : Perform this operation to limit recovery to the end of the archived redo logs. The recovery stops with an error if no SET UNTIL command is specified.
  • RESTORE the database.
  • SWITCH DATAFILE ALL : Run the SWITCH DATAFILE ALL command so that the control file recognizes the new path names as the official new names of the data files.
  • RECOVER the database.
Following is the restore script, restore-script.sql. Execute the following code in RMAN:
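A sketch of such a script; the file numbers, paths and SCN below are all made up (the SCN is the one recorded earlier from V$BACKUP_ARCHIVELOG_DETAILS, and the rename is only needed for redo logs moving to new paths):

RUN
{
  SET UNTIL SCN 4005612;
  SET NEWNAME FOR DATAFILE 1 TO 'D:\oradata\orcl\system01.dbf';
  SET NEWNAME FOR DATAFILE 2 TO 'D:\oradata\orcl\sysaux01.dbf';
  SET NEWNAME FOR DATAFILE 3 TO 'D:\oradata\orcl\undotbs01.dbf';
  SET NEWNAME FOR DATAFILE 4 TO 'D:\oradata\orcl\users01.dbf';
  SQL "ALTER DATABASE RENAME FILE ''C:\oradata\orcl\redo01.log''
       TO ''D:\oradata\orcl\redo01.log''";
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
  RECOVER DATABASE;
}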

From the RMAN prompt, open the database with the RESETLOGS option:
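RMAN> ALTER DATABASE OPEN RESETLOGS;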

If all the above steps are executed properly, then you will have the production database restored on the test machine.

We have the target database named TECH, and we are going to create a duplicate database named AUXDB using an RMAN auxiliary instance, using the backup location, without making a connection to the target database; we will, however, be accessing the recovery catalog database.

Note: This practical assumes you have the same directory structures and that it is performed on the same server. Only the database name is different, so we are converting the paths with respect to the db name only.




  1. Check the log_mode of the target database. The target must be in archivelog mode; otherwise, put the database in archivelog mode.
  2. Now create a new database, manually or via DBCA, say CATDB. Then run the command below on CATDB to set it up as the new recovery catalog for our use.
  3. Connect to the newly created recovery catalog via RMAN.
  4. Now register your target database (tech) with the newly created recovery catalog database (catdb).
  5. On the Windows platform you need to create a new instance for auxdb, which also creates a password file for it.
  6. Create all the folders in the Oracle folder structure.
  7. Create a pfile to start the database: open Notepad, type db_name=auxdb, save it in the ORACLE_HOME\database folder, and start the instance in nomount stage.
  8. Now run the command below to duplicate the database without a target connection but using the recovery catalog.
  9. If all steps are done correctly, you will have a new database clone created successfully without any connection made to the target. But you need a Recovery Manager backup for this scenario, and a connection to the recovery catalog database.
A consolidated sketch of the commands for these steps follows.
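A hedged sketch of the steps above. TECH, AUXDB and CATDB come from the text; every user name, password and path below is made up for illustration:

-- Step 1: confirm the target is in archivelog mode
SQL> SELECT log_mode FROM v$database;

-- Step 2: on CATDB, create a catalog owner
SQL> CREATE USER rcat IDENTIFIED BY rcat
  2  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
SQL> GRANT recovery_catalog_owner TO rcat;

-- Step 3: connect to CATDB with RMAN and create the catalog
C:\> rman catalog rcat/rcat@catdb
RMAN> CREATE CATALOG;

-- Step 4: register the target (TECH) in the catalog
C:\> rman target sys/oracle@tech catalog rcat/rcat@catdb
RMAN> REGISTER DATABASE;

-- Step 5: create the AUXDB instance (this also creates its password file)
C:\> oradim -NEW -SID auxdb

-- Step 7: initauxdb.ora in ORACLE_HOME\database contains db_name=auxdb
-- (log_file_name_convert can also be set there); start the instance
C:\> set ORACLE_SID=auxdb
C:\> sqlplus / as sysdba
SQL> STARTUP NOMOUNT

-- Step 8: duplicate without connecting to the target, via the catalog,
-- converting only the db name portion of the paths
C:\> rman catalog rcat/rcat@catdb auxiliary /
RMAN> DUPLICATE DATABASE tech TO auxdb
2>      DB_FILE_NAME_CONVERT ('tech','auxdb');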

 

 REMAP_DATA

Default: There is no default
Purpose
The REMAP_DATA parameter allows you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file. A common use for this option is to mask data when moving from a production system to a test system. For example, a column of sensitive customer data such as credit card numbers could be replaced with numbers generated by a REMAP_DATA function. This would allow the data to retain its essential formatting and processing characteristics without exposing private data to unauthorized personnel.
The same function can be applied to multiple columns being dumped. This is useful when you want to guarantee consistency in remapping both the child and parent column in a referential constraint.
Syntax and Description
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
The description of each syntax element, in the order in which they appear in the syntax, is as follows:
schema -- the schema containing the table to be remapped. By default, this is the schema of the user doing the export.
tablename -- the table whose column will be remapped.
column_name -- the column whose data is to be remapped. The maximum number of columns that can be remapped for a single table is 10.
schema -- the schema containing the PL/SQL package you have created that contains the remapping function. As a default, this is the schema of the user doing the export.
pkg -- the name of the PL/SQL package you have created that contains the remapping function.
function -- the name of the function within the PL/SQL package that will be called to remap the column in each row of the specified table.

Restrictions

  • The datatypes of the source argument and the returned value should both match the data type of the designated column in the table.
  • Remapping functions should not perform commits or rollbacks except in autonomous transactions.
  • The maximum number of columns you can remap on a single table is 10. You can remap 9 columns on table a and 8 columns on table b, and so on, but the maximum for each table is 10.
  • The use of synonyms as values for the REMAP_DATA parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, an error would be returned if you specified regn as part of the REMAP_DATA specification.
  • Remapping LOB column data of a remote table is not supported.
Source Reference :- docs.oracle.com

Example

  1. Create a table.
  2. Create a package specification.
  3. Create a package body.
  4. Perform the export using expdp.
  5. Check that your data has been exported in encrypted form.
  6. Drop the table to test the import.
  7. Import the table using impdp without the remap_data parameter.
  8. Your table is imported, but the address column data is still encrypted.
  9. Drop the table again to check the impact of using the remap_data parameter while importing.
  10. Import with the remap_data parameter.
  11. Now check again: your table is imported with the decrypted original data.
A sketch of these steps appears below.
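A runnable sketch of the flow. The schema, table, package and file names are all made up, and the "encryption" is a trivial reversible byte reversal via UTL_RAW, purely for demonstration:

-- 1. a table with a "sensitive" column
CREATE TABLE scott.customers (id NUMBER, address VARCHAR2(100));
INSERT INTO scott.customers VALUES (1, '221B Baker Street');
COMMIT;

-- 2 and 3. a package whose function scrambles a value reversibly
-- (reversing the bytes twice restores the original, so the same
--  function serves for both export and import)
CREATE OR REPLACE PACKAGE scott.remap_pkg AS
  FUNCTION scramble (p VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/
CREATE OR REPLACE PACKAGE BODY scott.remap_pkg AS
  FUNCTION scramble (p VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN utl_raw.cast_to_varchar2(utl_raw.reverse(utl_raw.cast_to_raw(p)));
  END scramble;
END remap_pkg;
/

-- 4. export with the column remapped (scrambled) in the dump file
expdp scott/tiger tables=customers directory=DATA_PUMP_DIR dumpfile=cust.dmp remap_data=scott.customers.address:scott.remap_pkg.scramble

-- 6-8. drop, then import WITHOUT remap_data: address stays scrambled
impdp scott/tiger tables=customers directory=DATA_PUMP_DIR dumpfile=cust.dmp

-- 9-11. drop again, then import WITH remap_data: scrambling is undone
impdp scott/tiger tables=customers directory=DATA_PUMP_DIR dumpfile=cust.dmp remap_data=scott.customers.address:scott.remap_pkg.scramble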

Check Out The Video For More Details and Practical.



Table Compression: Overview
Oracle Database supports several methods of table compression; the two covered here are:

Basic table compression
OLTP table compression

Oracle Corporation recommends compressing all data to reduce storage costs. The Oracle database can use table compression to eliminate duplicate values in a data block. For tables with highly redundant data, compression saves disk space and reduces memory use in the database buffer cache. Table compression is transparent to database applications.
The table_compression clause is valid only for heap-organized tables. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
With basic compression, the Oracle database compresses data only at the time of bulk loading, using operations such as direct-path loads or CREATE TABLE ... AS SELECT.
With COMPRESS FOR OLTP, the Oracle database compresses data during all DML operations on the table.
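A minimal sketch of the corresponding DDL (11g Release 2 syntax; the table names are made up):

CREATE TABLE sales_basic (id NUMBER, amount NUMBER) COMPRESS BASIC;
-- compressed only during direct-path (bulk) loads

CREATE TABLE sales_oltp (id NUMBER, amount NUMBER) COMPRESS FOR OLTP;
-- compressed during all DML

CREATE TABLE sales_plain (id NUMBER, amount NUMBER) NOCOMPRESS;
-- the default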


Understand Deferred Segment Creation

Beginning with Oracle Database 11g Release 2, when you create heap-organized tables in a locally managed tablespace, the database defers table segment creation until the first row is inserted.
In addition, segment creation is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table.
Note:
In Release 11.2.0.1, deferred segment creation is not supported for partitioned tables. This restriction is removed in release 11.2.0.2 and later.
The advantages of this space allocation method are the following:
  • It saves a significant amount of disk space in applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
  • It reduces application installation time.
There is a small performance penalty when the first row is inserted, because the new segment must be created at that time.
To enable deferred segment creation, the COMPATIBLE initialization parameter must be set to 11.2.0 or higher.
The new clauses for the CREATE TABLE statement are:
  • SEGMENT CREATION DEFERRED
  • SEGMENT CREATION IMMEDIATE
These clauses override the default setting of the DEFERRED_SEGMENT_CREATION initialization parameter, TRUE, which defers segment creation. To disable deferred segment creation, set this parameter to FALSE.
Note that when you create a table with deferred segment creation, the new table appears in the *_TABLES views, but no entry for it appears in the *_SEGMENTS views until you insert the first row.
You can verify deferred segment creation by viewing the SEGMENT_CREATED column in *_TABLES, *_INDEXES, and *_LOBS views for non-partitioned tables, and in *_TAB_PARTITIONS, *_IND_PARTITIONS, and *_LOB_PARTITIONS views for partitioned tables.
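A quick way to see this behaviour for yourself (the table name is made up):

CREATE TABLE demo_def (id NUMBER) SEGMENT CREATION DEFERRED;

SELECT segment_created FROM user_tables WHERE table_name = 'DEMO_DEF';
-- NO: the table exists, but no segment (and no *_SEGMENTS entry) yet

INSERT INTO demo_def VALUES (1);
COMMIT;

SELECT segment_created FROM user_tables WHERE table_name = 'DEMO_DEF';
-- YES: the first insert created the segment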

Reference Source - docs.oracle.com 




Purpose of RMAN TSPITR

Recovery Manager (RMAN) TSPITR enables quick recovery of one or more tablespaces in a database to an earlier time without affecting the rest of the tablespaces and objects in the database.
RMAN TSPITR is most useful for the following situations:
  • To recover a logical database to a point different from the rest of the physical database, when multiple logical databases exist in separate tablespaces of one physical database. For example, you maintain logical databases in the orders and personnel tablespaces. An incorrect batch job or data manipulation language (DML) statement corrupts the data in only one tablespace.
  • To recover data lost after data definition language (DDL) operations that change the structure of tables. You cannot use Flashback Table to rewind a table to before the point of a structural change such as a truncate table operation.
  • To recover a table after it has been dropped with the PURGE option.
  • To recover from the logical corruption of a table.
  • To recover dropped tablespaces. In fact, RMAN can perform TSPITR on dropped tablespaces even when a recovery catalog is not used.
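For reference, a minimal sketch of the TSPITR command itself; the tablespace name, target time and auxiliary destination are made up, and in practice you should back up the tablespace before bringing it back online:

RMAN> RECOVER TABLESPACE users
2>      UNTIL TIME "TO_DATE('2013-06-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
2>      AUXILIARY DESTINATION 'D:\aux';
RMAN> BACKUP TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';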


Source and More Info :- docs.oracle.com

Watch this Video for Step by Step Procedure for Performing RMAN Tablespace Point-in-Time Recovery (TSPITR).

RMAN has the ability to duplicate, or clone, a database from an active database. It is possible to create a duplicate database on a remote server with the same file structure, on a remote server with a different file structure, or on the local server with a different file structure.
