Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Oracle Data Warehousing, UNIX and large file-enabled file systems

From: RSH <RSH_Oracle_at_worldnet.att.net>
Date: Mon, 25 Mar 2002 20:00:27 GMT
Message-ID: <vlLn8.5643$se.561445@bgtnsc04-news.ops.worldnet.att.net>


If you have a chance to re-examine your options vis-a-vis raw i/o versus JFS, I'd strongly suggest it.

JFS-type file systems seem to impose enormous overhead on massive-read, few-write systems, where the typical inputs are large batch feeds.

If you are trying to do simultaneous OLTP and LRQ/Data Warehouse stuff on the same box, well, good luck.

RSH.

"Don Gillespie" <don.gillespie_at_mts.mb.ca> wrote in message news:6ffd83a6.0203251137.50307a5f_at_posting.google.com...
> I am the DBA for a data warehouse environment that is expected to get
> to about 3TB. That would mean about 1500 or so data files with the
> 2GB file limit. Besides being a nightmare to manage that many files,
> I anticipate the overhead on checkpoints would be tremendous. The
> environment is 32-bit Oracle (possibility of 64 bit in the future) on
> AIX with disk storage on an IBM Shark SAN (RAID5, 32K stripe; no
> choice here), with a 16K Oracle block size (the max allowed). We are
> using Journaled File Systems, not raw partitions. I am contemplating
> the use of large file-enabled JFSs for all JFSs that would contain
> Oracle data files, log files and control files. But I don't know much
> about them, and I am wondering if there are serious performance, space
> consumption or administration issues in doing so.
>
> I understand with large file-enabled JFSs, I could have 64GB files
> (one posting says 32GB is max for Oracle) and up to 8TB JFSs. But I
> have heard conflicting comments on implications for INODEs, allocation
> units and fragmentation sizes(?). Am I going to suffer an I/O penalty
> every time the database attempts to write blocks out to the files,
> especially if the blocks are non-contiguous? What about disk space
> usage? Any other issues (backup/recovery, RMAN, EXP/IMP, etc.)?
>
> I would be particularly interested in what others have experienced.
> What file sizes did you allow? What size of JFSs were allowed? What
> issues were encountered? If you had the chance to start from scratch
> again, would you go large file-enabled again, and if so, what would
> you set as your maximum file size and JFS size?
>
> These issues have come to the forefront as I prepare to create the
> tablespaces, so unfortunately this is a time-sensitive issue. Any
> input would be GREATLY appreciated!
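The file-count arithmetic in the quoted post can be sketched as follows. This is a minimal illustration, assuming the 2 GB per-file limit the poster cites and the 32 GB large-file cap another posting mentions; the function name is hypothetical:

```python
import math

def datafiles_needed(db_size_gb, max_file_gb):
    """Minimum number of data files needed to hold db_size_gb,
    given a per-file size cap of max_file_gb."""
    return math.ceil(db_size_gb / max_file_gb)

# 3 TB warehouse under the 2 GB file limit: ~1500 files, as the poster estimates
print(datafiles_needed(3 * 1024, 2))    # 1536
# Under a 32 GB large-file-enabled cap, the count drops dramatically
print(datafiles_needed(3 * 1024, 32))   # 96
```

The order-of-magnitude drop in file count is the administrative motivation for large file-enabled JFSs, independent of any I/O performance question.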
Received on Mon Mar 25 2002 - 14:00:27 CST
