Oracle FAQ Your Portal to the Oracle Knowledge Grid


RE: (Fwd) Re: (Fwd/Oracle) Does NT write to random locations on disk?

From: Mohan, Ross <>
Date: Mon, 12 Mar 2001 15:11:08 -0800
Message-ID: <>

KK - I definitely agree w/ your statement. I'd take computer number one, too! And I did not have the resources to dig into it more.... I am guessing that, like me, you'd want to if you could.... :)


-----Original Message-----
From: Kevin Kostyszyn []
Sent: Monday, March 12, 2001 4:56 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: (Fwd) Re: (Fwd/Oracle) Does NT write to random locations on disk?
I don't know, Ross, I didn't want to dig too deep into it. The way I see it is as follows: "hey, that computer is a dual 550 with 10k rpm SCSI drives and a gig of RAM. I betcha it would be faster than that PIII 600 with IDE drives."
I know it's simple-minded, but it's kind of true :)


-----Original Message-----
From: [] On Behalf Of Mohan, Ross
Sent: Monday, March 12, 2001 4:17 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: (Fwd) Re: (Fwd/Oracle) Does NT write to random locations on disk?
    I don't see the logic in the last post: "You can't have fast     and best."
    First, he doesn't define terms. "Fast"?  Is that peak     I/O? Streaming I/O? Single block read? Seek time?     Write time? Come on, trying to reduce this to an     undifferentiated "fast" or "slow" verges on the useless unless one     takes the effort to provide an EXPLICITLY CITED METRIC for     speed. And this fellow didn't.
    Second, it's confusing: why is "fast" set against "best" as     though the one is somehow the enemy of the other?     Huh?
Third, it leaves out any discussion of the effect of on-disk and on-controller cache. (Not to mention system-level cache.) As far as the application is concerned, it does not see the "disk"; it sees controller cache, disk cache, and disk as an amalgam.
Fourth, since WHEN did the choice become forced into "Do you want a fast hard disk array with lots of fragments, or a slow disk array with minimal fragments?" Geez, can I have a slow disk array with lots of fragments? The only statement I agree with, either logically or from experience, is the bit about OS vendors keeping a bit secret from the world on their...well, "secret sauce". Sure, you can keep a little bit secret, but come on, folks, it's not like MS has any other/better/special MoJo than any other vendor. What? When the aliens landed on the Redmond campus and revealed their special VASTLY SUPERIOR alien OS technology, no one else noticed?
The fact is, data access goes through a system. The *whole* system -- including caches -- counts. And logic will tell you that long-stride streaming I/O (think Oracle Video Server, e.g.) will work FASTER and therefore BETTER on a DEFRAGMENTED disk. (geez)
    I guess this one needs someone who really cares enough to     actually test it.
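In that spirit, here is a minimal sketch of what "an explicitly cited metric" looks like in practice. The file name, block size, and file size are arbitrary assumptions, not from the thread; the sketch times the same block reads issued sequentially and then in random order.

```python
import os
import random
import tempfile
import time

BLOCK = 8192      # 8 KB per read, roughly a DB block size of the era
NBLOCKS = 2048    # 16 MB scratch file

# Build a scratch file to read back.
path = os.path.join(tempfile.mkdtemp(), "scratch.dat")
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK) * NBLOCKS)

def timed_read(offsets):
    """Read one BLOCK at each offset; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(NBLOCKS)]
scattered = sequential[:]
random.shuffle(scattered)

t_seq = timed_read(sequential)
t_rnd = timed_read(scattered)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")
```

On a warm OS cache both numbers mostly measure syscall overhead; on a cold spinning disk the shuffled pattern is dominated by seek time, which is exactly why single-block reads, streaming reads, and seek time have to be cited as separate metrics.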

The OS does decide where to put files based on its own algorithms. This is a big secret with NT (it is part of their "Intelligence"). All OSes have some form of system for writing data optimally to a disk or drive array. They may give you bits and pieces of how it works, but the details will remain MS confidential.

There are a couple of industry-wide accepted examples with no heuristics or intelligence built in.

Generally there is a big tradeoff in optimizing writes and reads on a hard disk. The more time which is spent in figuring out where something goes, the slower disk access is. Do you want a fast hard disk array with lots of fragments, or a slow disk array with minimal fragments? The choice is yours; you can't have both fast and best.
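The tradeoff claimed above can be made concrete with a toy model. This is only an illustration of how fragments accumulate under a naive first-fit placement policy on a churning disk; it is not a description of NTFS's actual (undisclosed) allocator, and all names and sizes are invented.

```python
import random

DISK_BLOCKS = 1000
disk = [None] * DISK_BLOCKS   # None = free block, otherwise the owning file id

def allocate(file_id, nblocks):
    """First-fit: take the first free blocks found, wherever they are."""
    placed = 0
    for i in range(DISK_BLOCKS):
        if disk[i] is None:
            disk[i] = file_id
            placed += 1
            if placed == nblocks:
                return True
    return False  # disk full

def delete(file_id):
    for i in range(DISK_BLOCKS):
        if disk[i] == file_id:
            disk[i] = None

def fragments(file_id):
    """Count contiguous runs of blocks owned by file_id."""
    runs, prev = 0, None
    for owner in disk:
        if owner == file_id and prev != file_id:
            runs += 1
        prev = owner
    return runs

random.seed(1)
# Churn: create files and delete some, punching holes in the free space.
for fid in range(40):
    allocate(fid, random.randint(5, 20))
    if fid % 2 == 1:
        delete(random.randint(0, fid))

# Now write one 100-block file "in one pass": placement is fast (first fit
# never hunts for a best hole), but the file lands in the scattered holes.
ok = allocate("newfile", 100)
print("placed:", ok, "fragments:", fragments("newfile"))
```

A "smarter" allocator could search for the largest hole and produce fewer fragments at the cost of more time per write, which is the fast-versus-best tradeoff the post describes.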
-----Original Message-----
Sent: Friday, March 09, 2001 1:19 PM
To: Levinson, Eric
Until a few days ago I would have agreed entirely with what you've said. However, about a week or so back, I ran into a problem with a disk that was so badly fragmented that Drive Image couldn't create an image of it. I ran Diskeeper on the drive, in fact several passes. At least 3 of them were after I removed ~half of the files, so that I had around 4Gb free on this 8Gb drive. The fragmentation improved only very slightly. Several files had in excess of 100 fragments. Since I was preparing for a machine upgrade anyway, I copied all the files off to another location, formatted the drive, then restored the files via xcopy. Much to my surprise, while the fragmentation was much less, I had several large files that still were badly fragmented. In fact the worst offender was a 100Mb file which still consisted of 123 fragments. I'm not attempting to disprove your thinking here, but I'm curious if you have any thoughts on possible reasons for this anomaly?

At 12:16 PM 3/9/2001 -0800, you wrote:
>Yes, file fragmentation is a big issue for products that run out of the OS.
>Oracle is kind of an exception, meaning the files it creates it manages.
>Yes, your database files may be fragmented, but it probably doesn't affect
>your database speed as much as tablespace fragmentation would, so I would
>just ignore it. Oracle manages how the database files are used pretty
>efficiently, even if they are fragmented.
>
>If you REALLY wanted to defragment your database files, there is a really
>easy way. Most online defrag utilities (like Diskeeper) simply copy the
>file to another location on the disk, hoping it will reduce the number of
>fragments.
>On a nearly full disk it will _increase_ the number of
>fragments, so this won't work.
>
>Best thing would be to back up all your database files to tape a few times.
>
>Delete all the database files from your disks. Also REMOVE all non-essential
>files like the contents of your temp directory, IE cache, etc.
>
>Defrag your hard disk (if the option is available in your defragger, choose
>"Free Space Defragmentation")
>
>Restore your database files to your hard disk. These files should be
>written to your hard disk in a contiguous fashion by default, if you have a
>hunk of open space on your drives.
>
>Another option would be to use a raw file system for Oracle. I am not sure
>if they support this on NT; I know Oracle supports this on Sun. Basically
>you don't put a file system on the drive. You give Oracle partitions, and
>it manages everything.
>
>Hope this helps!
>Eric
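Eric's "hunk of open space" point can be sketched with a toy extent allocator. The extent sizes are invented for illustration; this is not NTFS's real placement logic, just the arithmetic behind his advice.

```python
def place(file_blocks, free_extents):
    """Fill free extents in order; return the number of fragments created.

    free_extents is a list of free-extent sizes, in blocks. The file gets
    one fragment for every extent it fully or partially occupies.
    """
    fragments = 0
    remaining = file_blocks
    for size in free_extents:
        if remaining == 0:
            break
        used = min(size, remaining)
        remaining -= used
        fragments += 1
    if remaining:
        raise ValueError("not enough free space")
    return fragments

# Nearly full disk: 120 free blocks, but scattered in small holes.
scattered = [10, 5, 20, 8, 2, 15, 30, 12, 10, 8]
# Freshly formatted (or free-space-defragged) disk: one big hole.
contiguous = [120]

print("scattered free space :", place(100, scattered), "fragments")
print("contiguous free space:", place(100, contiguous), "fragment")
```

With these invented numbers the same 100-block file lands in 8 fragments when the free space is scattered and 1 fragment when it is contiguous, which is why "copy it somewhere else" defragging fails on a nearly full disk while restoring into a defragged hunk of free space works.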
>-----Original Message-----
>From: Mike Soultanian [mailto:msoultan_at_CSULB.EDU]
>Sent: Friday, March 09, 2001 12:01 PM

>Subject: Re: (Fwd/Oracle) Does NT write to random locations on disk?
>

>I don't know the answer to your question, but if you didn't already
>know, there is Diskeeper for NT. Plus, they have a frag guard thing
>that will defrag on the fly, or something like that. I haven't tried
>it, I just get their newsletter all the time :)
>
>Later,
>mike
>
>"Eric D. Pierce" wrote:
>
> > ------- Forwarded message follows -------
> > Date sent: Fri, 09 Mar 2001 11:00:31 -0800
> > To: Multiple recipients of list ORACLE-L <>
> > From: "Boivin, Patrice J" <>
> > Subject: Does NT write to random locations on disk?
> >
> > Using a little utility called contig I noticed that the Oracle 8.1.6
> > datafiles on my test NT server are quite fragmented, an average of 177
> > fragments per file, 118 fragments for the OEM repository datafile. The poor
> > utility couldn't do anything with the database files; they are too large,
> > perhaps.
> >
> > These were
> > created on an empty server, 8i release 2 went on it after a
> > defrag, then the OEM. This is on a hard disk with 1.2G of free space; none
> > of the datafiles come close to that.
> >
> > Why so many
> > fragments? Oracle created those files in one pass; does NT
> > write randomly to disk or what?
> >
> > Won't this have an impact on my NT database's performance?
> >
> > Oracle says tablespace fragmentation is not a big deal, but fragmentation
> > at the OS level matters. Supposedly that's why NT and Windows 9x came with
> > defragmentation tools.
> >
> > ???
> >
> > Is there a registry setting somewhere to tell NT to write contiguously to
> > disk?
>
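One possible answer to the question above, offered as a guess rather than a confirmed account of NT's behavior: Oracle creates several datafiles during the same install, and if they grow concurrently, an allocator that simply appends each extension to the next free block will interleave their blocks. The file names below are invented for illustration.

```python
# Toy model: two datafiles grow at the same time, and an appending
# allocator hands each extension the next free block, so the two files
# end up interleaved on disk even though each was written "in one pass".
disk = []
for _ in range(10):
    disk.append("system.dbf")   # one datafile being extended...
    disk.append("oemrep.dbf")   # ...while another grows in parallel

def fragments(name):
    """Count contiguous runs of blocks owned by one file."""
    runs, prev = 0, None
    for owner in disk:
        if owner == name and prev != name:
            runs += 1
        prev = owner
    return runs

print(fragments("system.dbf"), "fragments in a file written 'in one pass'")
```

Ten alternating extensions produce ten fragments per file; scale that to a multi-hundred-megabyte datafile extended in small chunks and fragment counts like 118 or 177 stop looking mysterious.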
>
>The WINNT-L list is hosted on a Windows NT(TM) machine running L-Soft
>international's LISTSERV(R) software. For subscription/signoff info
>and archives, see .

Received on Mon Mar 12 2001 - 17:11:08 CST
