Re: big table headache

From: Ronald J.H. Rood <ronr_at_my-deja.com>
Date: Thu, 11 Jan 2001 08:55:41 GMT
Message-ID: <93jseb$jnt$1_at_nnrp1.deja.com>


In article <93jp96$g9g$1_at_nnrp1.deja.com>,   susana73_at_hotmail.com wrote:
> Hi all,
>
> I have a database consisting of 30 tables, around 500 GB in total. 29 of
> them add up to only 20 GB. A single table, 'arch_struct', occupies
> almost 500 GB by itself. It is a table with a BLOB column and stores the
> old records from the table 'struct'. Records get archived from 'struct'
> to 'arch_struct' every week. 'arch_struct' does get accessed (both read
> and write), but infrequently.
>
> This gives me a lot of trouble in terms of backup and maintenance. It
> takes me a long time to back up, especially considering I only have
> 20 GB of regular data. I have also found that Oracle does not reuse the
> deleted space efficiently. I want to defragment the table, but I don't
> have 500 GB of space to dump it into.

Hi Susan

What Oracle version are you using?
Could partitioning be an option for you? With a little luck you can place partitions in a read-only tablespace. Of course, this only works if your application updates only the read/write partition[s], which you would hope to keep to a minimum.
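A rough sketch of that idea (table and column names other than 'arch_struct' are my assumptions, not Susan's schema; syntax as in Oracle 8i range partitioning):

```sql
-- Hypothetical sketch: range-partition arch_struct by the date a row
-- was archived, with each older partition in its own tablespace.
CREATE TABLE arch_struct_part (
  struct_id    NUMBER,
  archived_on  DATE,
  payload      BLOB
)
PARTITION BY RANGE (archived_on) (
  PARTITION p_2000 VALUES LESS THAN (TO_DATE('2001-01-01','YYYY-MM-DD'))
    TABLESPACE ts_arch_2000,
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
    TABLESPACE ts_arch_current
);

-- Once the weekly archiving only ever touches the current partition,
-- the tablespaces holding the old partitions can be frozen:
ALTER TABLESPACE ts_arch_2000 READ ONLY;
```

Read-only tablespaces only need to be backed up once, which is where the backup-time win comes from.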

Another option is to use RMAN for backup/recovery and use its block-level incremental backups, so that only changed blocks are written instead of the whole 500 GB.
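A minimal sketch of an RMAN incremental scheme (commands as in Oracle 8i RMAN; the schedule is just an example):

```sql
-- Hypothetical sketch: take one full (level 0) base backup,
-- then periodic level 1 incrementals that back up only the
-- blocks changed since the last backup.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- later, e.g. weekly, after the archiving job:
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

Since 'arch_struct' is written infrequently, the level 1 backups should stay close to the size of the 20 GB of active data rather than the 500 GB total.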

Without the space to place a copy, you have a problem. Maybe you can dump it to tape[s].

Ronald
http://ronr.nl/unix-dba
The best way to accelerate a computer 'running' windows is by gravitation

Sent via Deja.com
http://www.deja.com/
Received on Thu Jan 11 2001 - 09:55:41 CET
