Re: 20 TB Bigfile

From: Stefan Koehler <contact_at_soocs.de>
Date: Tue, 10 Mar 2015 19:49:25 +0100 (CET)
Message-ID: <211172076.92790.1426013365254.JavaMail.open-xchange_at_app10.ox.hosteurope.de>

Hi Keith,
unfortunately you have not mentioned the database version. However, as this database is related to SAP, bigfile tablespaces are supported with 11.2.0.2 (or newer) and BR*Tools 7.20 Patch Level 20 (or newer). Please check SAP Note #1644762 for more details. With the 8 KB block size that Oracle/SAP require, the maximum size of a bigfile tablespace is 32 TB (2^32 blocks in a single data file), which matches the ceiling of 1023 smallfile data files at 32 GB each, so you still have plenty of headroom in your scenario.
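As a quick illustration - a minimal sketch only, where the tablespace name PSAPSR3X, the +DATA disk group and the sizes are assumptions, not taken from your system - creating and verifying a bigfile tablespace on ASM could look like this:

   -- create a bigfile tablespace on ASM (PSAPSR3X and +DATA are hypothetical)
   CREATE BIGFILE TABLESPACE psapsr3x
     DATAFILE '+DATA' SIZE 1T
     AUTOEXTEND ON NEXT 32G MAXSIZE 32T;

   -- verify that the tablespace really is a bigfile tablespace
   SELECT tablespace_name, bigfile
     FROM dba_tablespaces
    WHERE tablespace_name = 'PSAPSR3X';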
 
You have to export/import with R3load (and/or the Distribution Monitor) in the SAP standard scenario, because you are migrating from Solaris (assuming SPARC, i.e. big endian) to Linux (little endian). In this scenario it would also be very easy to split the objects into several tablespaces by modifying the R3load control files.
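By the way, you can confirm the endian mismatch (which is why the data files can not just be copied over) from the database itself - the view and columns below are standard Oracle:

   -- compare endian formats of the source and target platforms
   SELECT platform_name, endian_format
     FROM v$transportable_platform
    WHERE platform_name LIKE '%Solaris%'
       OR platform_name LIKE '%Linux%';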
 
However, backup & restore is the critical point if you still want to go with a bigfile tablespace. You can use RMAN (and have to in the case of ASM) with multi-section backups to parallelize the backup of the single large data file.
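A minimal RMAN sketch (the channel count and section size are assumptions you would tune to your I/O bandwidth and backup window; PSAPSR3X is again a hypothetical name):

   RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
   RMAN> BACKUP SECTION SIZE 128G TABLESPACE psapsr3x;

With SECTION SIZE each channel backs up a different 128 GB section of the one big data file, so the backup runs in parallel even though there is only a single file to read.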
 
Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: _at_OracleSK

> Keith Moore <kmoore_at_zephyrus.com> hat am 10. März 2015 um 13:23 geschrieben:
>
> We have a client who is moving a SAP database from Solaris (non-ASM) to Linux
> (ASM). This database is around 21 TB but 20 TB of that is a single tablespace
> with 1023 data files (yes, the maximum limit).
>
> On the new architecture we are considering using a single 20 TB Bigfile
> tablespace. Does anyone know of any negatives to doing that? Bugs?
> Performance? Other?
>
> Moving the data into multiple tablespaces will not be an option.
>
> Thanks
> Keith Moore
-- http://www.freelists.org/webpage/oracle-l