Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Oracle Data Warehousing, UNIX and large file-enabled file systems

From: koert54 <koert54_at_nospam.com>
Date: Mon, 25 Mar 2002 21:10:11 GMT
Message-ID: <TmMn8.32037$DE4.4269@afrodite.telenet-ops.be>


> If you have a chance to re-examine your options vis-a-vis raw i/o versus
> JFS, I'd strongly suggest it.

With the choice between JFS and raw for large DBs, you should keep an eye on the maximum
allowed logical volumes or raw devices per volume group. If I'm not mistaken... on AIX the max
is 256 logical volumes per standard volume group (I could be wrong here) ...
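A rough back-of-the-envelope sketch of why that limit bites here, using the numbers from this thread (a ~3TB warehouse, 2GB raw devices, and the 256-LV standard VG limit mentioned above — check the actual limits on your AIX level):

```python
import math

# Thread's scenario: ~3 TB database, one 2 GB raw logical volume per
# datafile, and (if the 256 figure is right) 256 LVs per standard AIX VG.
db_gb = 3 * 1024
lv_gb = 2
max_lvs_per_vg = 256

lvs_needed = math.ceil(db_gb / lv_gb)
vgs_needed = math.ceil(lvs_needed / max_lvs_per_vg)

print(lvs_needed, vgs_needed)  # 1536 raw LVs -> at least 6 volume groups
```

So a pure raw-device layout at the 2GB file size would need half a dozen volume groups just to hold the devices — worth knowing before you commit.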

> JFS type systems seem to be an enormous overhead for massive read / few
> write systems, where the typical inputs are large batch feeds.

Correct - but then again, you should always take great care in planning your JFS logs as well ... which is sometimes ignored, leaving one JFS log serving a large number of journaled file systems ...

Coming back to the opinion about many smaller db files versus a few large files ...
I can hardly imagine choosing 64GB datafiles unless you only have a few *really* large tablespaces ... with
smaller datafiles per tablespace spread over multiple SSA adapters & 8-packs, PQ is definitely going to benefit ...
On the other hand - if you use uniform-sized locally managed tablespaces, you'll lose the first extent of each datafile ... for example, a 20GB locally managed tablespace (uniform extent size 200MB) with 2GB datafiles -> 10 datafiles x 200MB -> you lose 2GB of space for actual data.
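That space-loss arithmetic can be sketched as follows (a hypothetical helper, not Oracle code; it assumes, as the post does, that one uniform extent per datafile is sacrificed to overhead):

```python
def lost_space_mb(tablespace_gb: int, datafile_gb: int, extent_mb: int) -> int:
    """Space lost to the first uniform extent of each datafile in a
    locally managed tablespace, per the claim in the post above."""
    n_files = (tablespace_gb * 1024) // (datafile_gb * 1024)  # datafiles needed
    return n_files * extent_mb

# The post's example: 20 GB tablespace, 2 GB datafiles, 200 MB uniform extents
print(lost_space_mb(20, 2, 200))  # 10 datafiles x 200 MB = 2000 MB (~2 GB)
```

The takeaway is that the loss scales with the number of datafiles, so large uniform extents combined with many small datafiles is the worst case.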

As for the IBM ESS Shark (make sure you get the fin! :-) ) - the Data Path Optimizer with fibre channel adapters really rocks! (It only took me 3 hours to create 600GB worth of empty tablespaces using 16 parallel sessions!)
Unfortunately the Shark only supports JBOD and RAID5 ... so if you need RAID1 you'll have to take two 8-packs (preferably cross-cluster) and mirror them using LVM ...


"RSH" <RSH_Oracle_at_worldnet.att.net> wrote in message news:vlLn8.5643$se.561445_at_bgtnsc04-news.ops.worldnet.att.net...
> If you have a chance to re-examine your options vis-a-vis raw i/o versus
> JFS, I'd strongly suggest it.
>
> JFS type systems seem to be an enormous overhead for massive read / few
> write systems, where the typical inputs are large batch feeds.
>
> If you are trying to do simultaneous OLTP and LRQ/Data Warehouse stuff on
> the same box, well, good luck.
>
> RSH.
>
>
> "Don Gillespie" <don.gillespie_at_mts.mb.ca> wrote in message
> news:6ffd83a6.0203251137.50307a5f_at_posting.google.com...
> > I am the DBA for a data warehouse environment that is expected to get
> > to about 3TB. That would mean about 1500 or so data files with the
> > 2GB file limit. Besides being a nightmare to manage that many files,
> > I anticipate the overhead on checkpoints would be tremendous. The
> > environment is 32-bit Oracle (possibility of 64 bit in the future) on
> > AIX with disk storage on an IBM Shark SAN (RAID5, 32K stripe; no
> > choice here), with a 16K Oracle block size (the max allowed). We are
> > using Journaled File Systems, not raw partitions. I am contemplating
> > the use of large file-enabled JFSs for all JFSs that would contain
> > oracle data files, log files and control files. But I don't know much
> > about them, and I am wondering if there are serious performance, space
> > consumption or administration issues in doing so.
> >
> > I understand with large file-enabled JFSs, I could have 64GB files
> > (one posting says 32GB is max for Oracle) and up to 8TB JFSs. But I
> > have heard conflicting comments on implications for INODEs, allocation
> > units and fragmentation sizes(?). Am I going to suffer an I/O penalty
> > every time the database attempts to write blocks out to the files,
> > especially if the blocks are non-contiguous? What about disk space
> > usage? Any other issues (backup/recovery, RMAN, EXP/IMP, etc.)?
> >
> > I would be particularly interested in what others have experienced.
> > What file sizes did you allow? What size of JFSs were allowed? What
> > issues were encountered? If you had the chance to start from scratch
> > again, would you go large file-enabled again, and if so, what would
> > you set as your maximum file size and JFS size?
> >
> > These issues have come to the forefront as I prepare to create the
> > tablespaces, so unfortunately this is a time-sensitive issue. Any
> > input would be GREATLY appreciated!
>
>
Received on Mon Mar 25 2002 - 15:10:11 CST
