Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: separate data/inidex

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Mon, 6 May 2002 12:10:38 +0100
Message-ID: <1020683797.19441.0.nnrp-08.9e984b29@news.demon.co.uk>

MFT is a TLA which is a DKI for me.

Generic Unix has (had ?) a similar strategy though.

makefs assumes that you will have lots of small files (average 10K, I think) and then tries to avoid using more than X% (50, I think) of a track when creating a file on an empty device. Of course it was adjustable, if you knew the three dozen or so parameters to makefs, but nowadays you don't see it.

Even with 'file 1 interleaved with file 2', should the performance impact be so great? Presumably the chunks used by NT are a reasonable size on a clean file system?

I guess if your Oracle blocksize were larger than the O/S block size, you could find that one Oracle block split into 32 (say) O/S pieces could have quite an impact on performance.
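The arithmetic behind that guess can be sketched with assumed sizes (a 16K Oracle blocksize against a 512-byte O/S allocation unit; both figures are illustrative, not taken from any particular system):

```python
# Back-of-the-envelope sketch (assumed sizes, not measurements): how many
# O/S blocks one Oracle block spans when the Oracle blocksize is larger.
ORACLE_BLOCK = 16 * 1024   # 16K Oracle blocksize (assumed)
OS_BLOCK = 512             # 512-byte O/S allocation unit (assumed)

pieces = ORACLE_BLOCK // OS_BLOCK
print(pieces)  # 32 - so one logical read could touch up to 32 scattered pieces
```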

But how much of the performance impact might be due to NT systems being used in a somewhat cavalier fashion - lots of little files get created and dropped over time, leaving lots of file fragments, and then Oracle files get extended, moved, or recreated in the gaps? As per your comments about sharing partitions with other folders and moving big data files.

In fact, given that a large fraction of I/O is single (Oracle) block, why should it matter if the underlying O/S has scattered its blocks completely randomly (assuming the Oracle and O/S blocks align)? After all, how many people now agree that "one extent isn't necessary" for data segments - the same arguments seem to make sense at the file system level.
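A rough cost model of that argument (all timings assumed, purely for illustration): if Oracle and O/S blocks align, each single-block read costs one seek plus one transfer wherever the block happens to sit, so random placement adds nothing; the damage only appears when a logical block is split into several scattered physical pieces.

```python
# Rough cost model (assumed timings): one seek + one transfer per
# physically separate chunk that a logical Oracle block occupies.
SEEK_MS = 8.0      # average seek + rotational latency (assumed)
TRANSFER_MS = 0.2  # transfer time for one chunk (assumed)

def read_cost_ms(chunks):
    """Cost of reading one Oracle block split into `chunks` pieces."""
    return chunks * (SEEK_MS + TRANSFER_MS)

aligned = read_cost_ms(1)   # block is one contiguous piece, anywhere on disk
split = read_cost_ms(32)    # block shattered into 32 scattered pieces
print(aligned, split)       # roughly 8.2 vs 262.4 ms per logical read
```

On this model, a randomly placed but unsplit block costs the same as a contiguous one - which is the "one extent isn't necessary" argument restated at the file-system level.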

--
Jonathan Lewis
http://www.jlcomp.demon.co.uk

Author of:
Practical Oracle 8i: Building Efficient Databases

Next Seminar - Australia - July/August
http://www.jlcomp.demon.co.uk/seminar.html

Host to The Co-Operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html



Nuno Souto wrote in message
<3cd51796$0$15478$afc38c87_at_news.optusnet.com.au>...

>
>- You create your initial files. NTFS will still fragment them around
>the MFT even though you are working off a pristine freshly made
>partition. This is because NTFS is completely incapable of creating a
>contiguous file even when you specify the final size and a single
>allocation. In fact it uses a strange algorithm to attempt to
>"optimally" place the file by fragmenting it around the MFT. Even though
>you know your placement was optimal.
>

>- And so on, chain reaction. With half a dozen big files, your data file
>fragments will be all over the place. Run a good defragger and it will
>show this problem even on a freshly made disk. Mix up a few other
>folders with programs or software and you're virtually guaranteed a well
>fragmented disk. All in the interest of optimisation.
>
>Best thing to do? Create all your data containers and any other folders
>on the disk. Then defrag.
>
>Do this also anytime you have to drop/create/re-create any big data file
>on that partition (like when reorg/recovery/adding more disk space), or
>add/remove folders and their contents. All of these fragment any new
>stuff as well.
>
Received on Mon May 06 2002 - 06:10:38 CDT
