Re: ORACLE on Linux - IO bottleneck

From: Eder San Millan <edersm_at_wanadoo.es>
Date: 15 Feb 2006 11:27:34 -0800
Message-ID: <1140031654.387600.172470@g14g2000cwa.googlegroups.com>

Noons wrote:

> Frank van Bortel wrote:
> >
> > Thanks for those links; I have nothing to add to this thread, but
> > follow it with all interest.
> > Looking forward to Nuno's posting
> >
>
> Well, not a full posting. But I finally got elvtune to do
> anything other than noise-level changes!
>
> Conditions are:
>
> long batch db load job, mostly with very large indexed
> range scans. Index keys are very large (url text +
> search terms + IP addresses, timestamps, domains
> and so on). Rows are also very large, typically we store
> between 5-10 rows per 8K db block, sometimes much
> less depending on how big the strings are that we
> get from the search engines. 130 million rows, no
> partitioning. 9.2.0.6, RHAS3 upd.3. Oracle has been
> patched up with the "direct io on nfs" and "aio" patches.
> ext3, 4K block size and filesystemio_options=directIO
> in init.ora. O_DIRECT has been confirmed active with
> strace.
>
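(Side note: the strace check for O_DIRECT can be reproduced roughly like this; the pid is a placeholder for whichever Oracle server process is doing the I/O:)

    # Watch open() calls of a running Oracle process; O_DIRECT among the
    # flags confirms direct I/O is actually being requested.
    strace -f -e trace=open -p <pid> 2>&1 | grep O_DIRECT
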
> iostat indicated a large number of read requests
> per second on the devices holding the indexes for this
> table. Number of reads was consistent with KB/s read
> speed. We were getting large queues for the devices
> (>300) and a large "usage" percentage (200%).
> Service time was on the order of several tens of ms.
>
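(For the record, the queue and service-time figures above come from extended iostat output, along these lines:)

    # Extended stats every 5 seconds; watch avgqu-sz (queue length),
    # await/svctm (latency in ms) and %util for the index devices.
    iostat -x 5
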
> elvtune showed r=2048, w=8192, b=6. Pretty standard
> for default Linux.
>
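(Those defaults can be read back by running elvtune with only a device name; note elvtune is a 2.4-kernel tool, gone with the 2.6 I/O schedulers. The device name here is a placeholder:)

    # Print the current I/O elevator settings for a device.
    /sbin/elvtune /dev/sdc
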
> My reasoning was: I know we are doing a lot of
> random, indexed io. Therefore, I will not be taking
> much advantage of io request elevator tuning as the
> chance of consecutive addresses is very remote. Other
> than the implicit two-fs-blocks-per-db-block (2*4K=8K).
>
> So I went radical:
>
> /sbin/elvtune -r 24 -w 24 -b 8 <device name>
>
> The motivation for the 24 was from some actual benchmark
> figures I got from a forum where Andrew and Andrea,
> two of the Linux folks involved in elvtune coding, were arguing
> their reasons. 24 seemed to be a sweet spot for random access
> with ext3.
>
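(Applied to each of the three devices holding the index, it would look something like this; the device names are illustrative:)

    # Drop the read/write latency limits to 24 requests on every
    # device holding the index.
    for dev in /dev/sdc /dev/sdd /dev/sde; do
        /sbin/elvtune -r 24 -w 24 -b 8 "$dev"
    done
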
> It was in our case. The process handling insert-updates
> in this table dropped from 2 hours exec time to 1 hour.
> MB/s increased from about 5MB/s on each of the devices
> where the index is stored (3 of them, so 15MB/s aggregate)
> to 15MB/s (45MB/s aggregate). iostat queue lengths dropped
> to the tens, and service times down to single-digit ms.
> I didn't measure the device where the table is kept as the
> speed there has never been a problem. Other than the obvious
> high physical io because of large rows.
>
> Full table scans happening at other times in this table
> didn't suffer at all. dbfmbrc is 8, well within
> the elvtune r/w limit of 24, so they still benefit
> from streaming and the SAN read-ahead which kicks in
> after < 8 consecutive reads.
>
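(dbfmbrc = db_file_multiblock_read_count; it can be read back from the instance to confirm it sits inside the elevator window. A sqlplus one-liner, assuming a sysdba login:)

    # Show db_file_multiblock_read_count; per the reasoning above it
    # should stay at or below the elvtune -r/-w values so full scans
    # keep merging into streaming reads.
    echo "show parameter db_file_multiblock_read_count" | sqlplus -s "/ as sysdba"
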
> Good enough improvement for me. Lessons: don't assume
> defaults are also the best, measure, reason, change,
> measure again. Repeat until satisfied, then STOP!
>
> time for a beer

GREAT!!! Thanks Noons, I think this test will help me with the one I'd like to do...

Thanks again, very clear!

Received on Wed Feb 15 2006 - 13:27:34 CST
