Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Oracle Data Warehousing, UNIX and large file-enabled file systems

From: zongk Tu <jpseo_at_yahoo.com>
Date: Tue, 26 Mar 2002 03:02:41 GMT
Message-ID: <3C9E9640.8376FD6F@yahoo.com>


Don,

If you can, get something like the Veritas File System. That would give you the option of mounting the file system so that I/O bypasses the kernel buffer cache.
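A minimal sketch of what such a mount might look like (the option names are VxFS mount options; check the mount_vxfs man page for your release, and treat the device path and mount point as hypothetical placeholders):

```shell
# Mount a VxFS file system so Oracle I/O bypasses the kernel buffer cache:
#   convosync=direct  - converts O_SYNC/O_DSYNC writes to direct I/O
#   mincache=direct   - makes ordinary reads and writes direct as well
# /dev/vg01/lvoradata and /u02/oradata are made-up names for illustration.
mount -F vxfs -o convosync=direct,mincache=direct \
    /dev/vg01/lvoradata /u02/oradata
```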
A true JFS also handles inodes and extents dynamically, which virtually eliminates the system CPU overhead of maintaining the superblock, inodes, etc. Mounted this way, a file system equals or betters raw logical volumes. If you can also match the stripe size for your LUNs to the kernel value scsi_phys_max, I am sure that will help as well. And with the large files option turned on, you need never worry about file system fragmentation.
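On the AIX side of Don's question, a large file enabled JFS has to be created that way up front; a sketch of the crfs invocation (flag names as I recall them from AIX 4.3/5L, so verify against your crfs man page; the volume group, size, and mount point are hypothetical):

```shell
# Create a large-file-enabled JFS (bf=true). Such a file system must use
# a 4096-byte fragment size and an nbpi of at least 4096.
# rootvg, the size (in 512-byte blocks, here ~8GB), and /u03/oradata
# are placeholder values.
crfs -v jfs -g rootvg -m /u03/oradata \
    -a size=16777216 -a bf=true -a nbpi=4096 -a frag=4096
```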
I am at an HP-UX shop hosting OLTP instances (JFS 3.3, 64-bit RDBMS 8.1.6.2 and 9.0.1.1, HP-UX 11.00 64-bit, XP-256 disk farms) and was able to prove all of this.

HTH

--
Zongk Tu

Oracle Apps DBA/Unix SA

Don Gillespie wrote:


> I am the DBA for a data warehouse environment that is expected to get
> to about 3TB. That would mean about 1500 or so data files with the
> 2GB file limit. Besides being a nightmare to manage that many files,
> I anticipate the overhead on checkpoints would be tremendous. The
> environment is 32-bit Oracle (possibility of 64 bit in the future) on
> AIX with disk storage on an IBM Shark SAN (RAID5, 32K stripe; no
> choice here), with a 16K Oracle block size (the max allowed). We are
> using Journaled File Systems, not raw partitions. I am contemplating
> the use of large file-enabled JFSs for all JFSs that would contain
> Oracle data files, log files and control files. But I don't know much
> about them, and I am wondering if there are serious performance, space
> consumption or administration issues in doing so.
>
> I understand with large file-enabled JFSs, I could have 64GB files
> (one posting says 32GB is max for Oracle) and up to 8TB JFSs. But I
> have heard conflicting comments on implications for INODEs, allocation
> units and fragmentation sizes(?). Am I going to suffer an I/O penalty
> every time the database attempts to write blocks out to the files,
> especially if the blocks are non-contiguous? What about disk space
> usage? Any other issues (backup/recovery, RMAN, EXP/IMP, etc.)?
>
> I would be particularly interested in what others have experienced.
> What file sizes did you allow? What size of JFSs were allowed? What
> issues were encountered? If you had the chance to start from scratch
> again, would you go large file-enabled again, and if so, what would
> you set as your maximum file size and JFS size?
>
> These issues have come to the forefront as I prepare to create the
> tablespaces, so unfortunately this is a time-sensitive issue. Any
> input would be GREATLY appreciated!
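For what it's worth, the file-count arithmetic in the question checks out; the 3TB, 2GB, and 32GB figures come straight from the post above:

```shell
# Datafiles needed for a 3TB warehouse at two maximum file sizes.
total_gb=$((3 * 1024))          # 3TB expressed in GB
files_2gb=$((total_gb / 2))     # with the 2GB file limit
files_32gb=$((total_gb / 32))   # with 32GB large files
echo "2GB files:  $files_2gb"   # 1536, in line with "about 1500"
echo "32GB files: $files_32gb"  # 96
```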
Received on Mon Mar 25 2002 - 21:02:41 CST

