Re: Oracle Data Warehousing, UNIX and large file-enabled file systems
I won't comment on how, or whether, your platform can support files larger than 2GB, but as far as the number of files is concerned, the issue of checkpoints may be irrelevant. If your data has a strong time element, then you may be able to work with time-based tablespaces, and then switch tablespaces older than a few weeks to read-only mode - at which point they are no longer subject to checkpoints and require no further backups.
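For illustration, a minimal sketch of the time-based tablespace approach described above might look like this. The tablespace and datafile names are invented for the example; only the READ ONLY / READ WRITE statements themselves are standard Oracle DDL.

```sql
-- Hypothetical example: one tablespace per month of warehouse data.
-- Names and paths below are illustrative, not from the original post.
CREATE TABLESPACE sales_2002_03
    DATAFILE '/u01/oradata/dw/sales_2002_03_01.dbf' SIZE 2000M;

-- Once the period has closed and the data is static, switch the
-- tablespace to read-only: its files are no longer subject to
-- checkpoints and need to be backed up only once.
ALTER TABLESPACE sales_2002_03 READ ONLY;

-- If a late correction is ever needed, it can be switched back.
ALTER TABLESPACE sales_2002_03 READ WRITE;
```

With most of a 3TB warehouse parked in read-only tablespaces, only the current (read-write) tablespaces contribute to checkpoint and routine backup overhead, regardless of how many 2GB files the older data occupies.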
--
Jonathan Lewis
http://www.jlcomp.demon.co.uk

Next Seminar - UK, April 3rd - 5th
http://www.jlcomp.demon.co.uk/seminar.html

Host to The Co-Operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Author of: Practical Oracle 8i: Building Efficient Databases

Don Gillespie wrote in message <6ffd83a6.0203251137.50307a5f_at_posting.google.com>...
>I am the DBA for a data warehouse environment that is expected to get
>to about 3TB. That would mean about 1500 or so data files with the
>2GB file limit. Besides being a nightmare to manage that many files,
>I anticipate the overhead on checkpoints would be tremendous. The
>environment is 32-bit Oracle (possibility of 64 bit in the future) on
>AIX with disk storage on an IBM Shark SAN (RAID5, 32K stripe; no
>choice here), with a 16K Oracle block size (the max allowed). We are
>using Journaled File Systems, not raw partitions. I am contemplating
>the use of large file-enabled JFSs for all JFSs that would contain
>Oracle data files, log files and control files. But I don't know much
>about them, and I am wondering if there are serious performance, space
>consumption or administration issues in doing so.
Received on Mon Mar 25 2002 - 15:21:24 CST