Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Newsgroup: c.d.o.server

Re: Direct I/O on vxfs on EMC

From: Paul Drake <paled_at_home.com>
Date: Tue, 21 Aug 2001 04:22:47 GMT
Message-ID: <3B81E1F7.F2C3938C@home.com>


Connor McDonald wrote:
>
> Just curious if anyone's come across this before. Got an
> email from a friend who's just put an (8i) database
> with everything (excepting the redo logs) on one large
> vxfs stripe on top of EMC disks (is standard raid-1
> config). The redos are on separate raid-1 (vxfs on
> EMC) disks.
>
> When the file systems are mounted for direct I/O - all
> the machine level stats (sar/iostat/vxstat) look fine
> but the users are up in arms about performance. The
> only sign of problems is that fibre interface to the
> disks is showing 1500 io's/sec.
>
> Mount the file systems as per normal (ie without
> direct io), and users are much happier, the fibre
> shows 120 io's/sec.
>
> Unfortunately I've just finished at the site - so I
> can't run any diagnostics. I'm guessing that maybe
> their Oracle buffer cache is way undersized, or there
> are some problems with the direct I/O on EMC disks (or
> the EMC cache), but I'd appreciate any other theories...
>
> Cheers
> Connor
>

Connor,

I know nothing about EMC and vxfs - but it seems likely that you're simply grabbing a much larger number of blocks with each read. What are the multiblock I/O settings (db_file_multiblock_read_count) in the init.ora?

It may be that when not running direct I/O you're getting a block at a time on index lookups, followed by a single-block fetch from the data file. Or maybe you're doing lots of consistent-read fetches that are causing lots of reads on the RBS - again, reads at a multiple of db_block_size on the order of 64 blocks rather than 8.
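Just to put rough numbers on the hunch (these are hypothetical figures, not from Connor's site): if direct I/O sends every Oracle read straight to the device, then a scan of a segment costs one physical I/O per db_file_multiblock_read_count blocks, whereas with buffered mounts the OS page cache and read-ahead absorb most of those calls. A quick back-of-envelope sketch:

```python
def physical_reads(scan_blocks, multiblock_count):
    """I/O requests needed to scan `scan_blocks` blocks when each
    request fetches up to `multiblock_count` blocks (ceiling division)."""
    return -(-scan_blocks // multiblock_count)

# Hypothetical segment of 64,000 blocks:
print(physical_reads(64_000, 8))    # 8000 requests at a multiblock count of 8
print(physical_reads(64_000, 64))   # 1000 requests at a multiblock count of 64
```

An 8x difference in request count, on top of losing OS read-ahead, could plausibly account for the jump from ~120 to ~1500 I/Os/sec on the fibre interface.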

just a hunch.

Paul

Received on Mon Aug 20 2001 - 23:22:47 CDT

