Re: 20 TB Bigfile

From: Keith Moore <kmoore_at_zephyrus.com>
Date: Tue, 10 Mar 2015 14:26:47 -0500
Message-ID: <b2e05c4329167c4d9b2afd02e371d23d.squirrel_at_lady.zephyrus.com>
Hi,

Sorry, I meant to include the version. It is version 11.2.0.3.6.

The strategy is to never restore from backups except as a last resort. We will fail over to the DR database if possible. If not, we will restore from a storage-level snapshot of production plus archive logs.

If the failure scenario is such that neither of those is feasible, we would restore from the RMAN backup. In that case, a full restore will be slow whether it's a single big file or many smaller files. The only case where it would make a difference is if we had a small file tablespace and only needed to restore a single data file.
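If we did end up in that last-resort case, the multi-section approach Stefan mentions would at least parallelize the work on the one big data file. A rough sketch (tablespace name, section size, and channel count are made up for illustration, not recommendations):

```sql
-- Rough sketch only: multi-section RMAN backup of a single large
-- bigfile tablespace. The tablespace name, section size, and channel
-- count below are illustrative; real values need sizing against the
-- backup target's throughput.
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d3 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d4 DEVICE TYPE DISK;
  -- Each 256 GB section becomes its own backup piece, so all four
  -- channels can work on the single data file concurrently.
  BACKUP SECTION SIZE 256G TABLESPACE sapdata_big;
}
```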

Keith

> Hi Keith, unfortunately you have not mentioned the database version.
> However, as this database is related to SAP, bigfile tablespaces are
> supported with 11.2.0.2 (or newer) and BR*Tools 7.20 Patch Level 20 (or
> newer). Please check SAP Note #1644762 for more details. The maximum size
> with 1023 data files would be 32 TB given the Oracle/SAP 8 KB block size
> requirement - so you still have plenty of space to go in your scenario.
>
> You have to export/import with R3load (and/or Distribution Monitor) in the
> SAP standard scenario as you migrate from Solaris (assuming SPARC, big
> endian) to Linux (little endian). In this scenario it is very easy to split
> the objects into several tablespaces via modifications to the R3load files.
>
> However, the backup & restore scenario is critical if you still want to go
> with bigfile tablespaces. You can use RMAN (or have to, in the case of ASM)
> with multi-section backups to parallelize the backups.
>
> Best Regards
> Stefan Koehler
>
> Freelance Oracle performance consultant and researcher
> Homepage: http://www.soocs.de
> Twitter: _at_OracleSK
> > Keith Moore <kmoore_at_zephyrus.com> wrote on 10 March 2015 at 13:23:
> >
> > We have a client who is moving a SAP database from Solaris (non-ASM) to
> > Linux (ASM). This database is around 21 TB, but 20 TB of that is a single
> > tablespace with 1023 data files (yes, the maximum limit).
> >
> > On the new architecture we are considering using a single 20 TB bigfile
> > tablespace. Does anyone know of any negatives to doing that? Bugs?
> > Performance? Other?
> >
> > Moving the data into multiple tablespaces will not be an option.
> >
> > Thanks
> > Keith Moore
> --
> http://www.freelists.org/webpage/oracle-l
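As a side note, Stefan's 32 TB figure is easy to sanity-check. Assuming the documented Oracle limits (2^22 blocks per smallfile data file, 2^32 blocks per bigfile tablespace) and the SAP-standard 8 KB block size:

```python
BLOCK_SIZE = 8 * 1024  # 8 KB, the Oracle/SAP standard block size

# Smallfile route: 1023 data files, each capped at 2**22 blocks (32 GiB).
smallfile_max = 1023 * (2 ** 22) * BLOCK_SIZE

# Bigfile route: a single data file capped at 2**32 blocks.
bigfile_max = (2 ** 32) * BLOCK_SIZE

print(f"smallfile tablespace limit: {smallfile_max / 2**40:.2f} TiB")
print(f"bigfile tablespace limit:   {bigfile_max / 2**40:.2f} TiB")
```

Both routes top out at roughly 32 TB with an 8 KB block size, so the 20 TB tablespace fits comfortably either way.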

--
http://www.freelists.org/webpage/oracle-l