
Re: 9i multi cache buffer

From: Geoff Ingram <geoff_at_dbcool.com>
Date: 15 Nov 2002 01:40:33 -0800
Message-ID: <84d1c3a9.0211150140.31bd2f6e@posting.google.com>


"Howard J. Rogers" <howardjr2000_at_yahoo.com.au> wrote in message news:<hjQA9.76549$g9.215982_at_newsfeeds.bigpond.com>...
> "Geoff Ingram" <geoff_at_dbcool.com> wrote in message
> >
> > Here's a different view.
> >
>
> Uh huh. Different, in this case, means wrong.
>
> > Oracle announced a record-breaking 4-CPU TPC-C result (Linux and HP-UX,
> > identical Intel hardware, Oracle10) to coincide with OracleWorld.
>
> You believe all this marketing stuff? Really??
>

The results are fact (actually still under review) and speak for themselves.
Do I "believe" Linux/Intel/10i is a great platform for running Oracle, in terms of performance and value for money across all kinds of Oracle apps?

100% yes. I'll go further. I believe it's the Oracle architecture of the future.

TPC-C gives it a certain credibility.

> >This
> > was achieved using multiple buffer caches:
> >
> > db_cache_size = 3500M
> > db_8k_cache_size = 512M
> > db_16k_cache_size = 3000M
> > db_keep_cache_size = 37750M
>
> Just hold on right there. In this "real world" test, it would appear we have
> getting on for 43 GIGAbytes of RAM in use. Mmmmm. This is not going to be
> telling us an awful lot that is of any relevance.

??? Show me any Oracle app that doesn't want to keep as much data cached as possible. What's wrong with that? If you had $400K to spend, wouldn't you do the same to get the best performance? As you probably don't have $400K, you would use a smaller cache.

Why are you overlooking the other caches:

db_8k_cache_size = 512M
db_16k_cache_size = 3000M

Either these are wasted (your view?) or they benefit performance.
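For anyone who hasn't used them, here's a minimal sketch of how a non-default cache gets used - assuming db_block_size is something other than 16K, and with purely illustrative names and sizes:

-- init.ora: reserve a cache for 16K blocks
db_16k_cache_size = 256M

-- any tablespace created with BLOCKSIZE 16K is served by that cache
CREATE TABLESPACE dss_data
  DATAFILE '/u01/oradata/dss01.dbf' SIZE 1000M
  BLOCKSIZE 16K;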

>
> > db_recycle_cache_size = 50M
> >
> > TPC-C is an extreme OLTP workload characterised by small transactions
> > and concurrent block access by multiple sessions. In this case a small
> > blocksize is good - evidenced by the fact the TPC-C database uses
> > db_block_size=2048.
> >
> > Smaller block size means latches for block access (data and index) are
> > held for a shorter time, increasing concurrency.
>
> Of course the issue in OLTP environments, even "extreme" ones like this
> fairy-land of a test you mention, is concurrent access to data by many
> users. And the larger your blocks, the more of a concurrency problem you
> have.
>

Unlike a lot of discussion and conjecture on the web (like this discussion we're having, which at this point contains no quantitative analysis as backup), TPC-C requires a full public disclosure of all code/hardware/costs. It's also audited. Check it out on the web.

As I said, TPC-C is an "extreme OLTP workload". Most apps aren't. No one said they are. That doesn't mean DBAs/businesses can't glean something useful from the TPC-C results.

TPC-C is definitely an excellent tutorial for DBAs who want to understand how to get max OLTP throughput from Oracle. Everything in TPC-C is done with that in mind.

Of course that's not the only goal in the real world, which is just as well.
You probably wouldn't choose redo logs that are several GB in size - but you would if you wanted to avoid a checkpoint during your sampled interval (see the sketch below).
TPC-C is honest about its purpose. The spec is published.
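A minimal sketch of the redo log trick (group number, path and size are all hypothetical):

ALTER DATABASE ADD LOGFILE GROUP 5
  ('/u01/oradata/redo05a.log') SIZE 4000M;

With logs that big, no log switch - and so no log-switch checkpoint - happens inside the measured window.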

It's disappointing that you consider that to be "fairy-land", given that some of the very best Oracle performance engineers work on that stuff, each TPC-C result takes several months end-to-end, and typically costs OEMs more than $1 million. Personally, I've learnt a huge amount from reading about TPC.

No benchmark is perfect - but TPC-C's goals are up front and published, and it doesn't pretend to be what it isn't. As an alternative, you might want to check out the open-source orabm benchmark on my website (http://www.dbcool.com). Unlike TPC-C, orabm is an extreme Oracle CPU stress test. I believe it provides useful results to help towards choosing your Oracle platform, within the scope of its stated goals. It also stresses the SGA memory.

> If I have a headache, however, any suggestion that I cure it by chopping off
> my head is going to be effective, but a tad drastic. Likewise, curing the
> concurrency issue by picking a small block size is a ludicrous response that
> only (a) a bad DBA or (b) a marketing department in a hurry would do. We
> have PCTFREE, INITRANS and MAXTRANS to address concurrency issues in a
> rather more thoughtful and skilful way.
>
> >For other workloads
> > (e.g DSS) larger block sizes may better suit the usage profile of the
> > data.
> > Same for UNDO, for which Oracle used 8K blocks in the benchmark.
> > What about a situation where your table has large rows (say 16K each)
> > with small index entries (say a numeric primary key)? Do you really
> > want them both stored in blocks of the same size?
> >
>
> Yup. The more index entries I can pack into a block, the smaller my index
> will be. Small indexes mean faster reading, and are intrinsically more
> likely to be picked by the optimizer in the first place.
>
> > Multiple buffer caches can significantly improve performance in the
> > right circumstances.

I stand by that. Not just for TPC-C.

>
> TPC benchmarks prove nothing.
>

I totally disagree. See above.

> Multiple buffer caches ignore the fact that your database does not live in a
> vacuum, but on a FILE SYSTEM. And File Systems have buffers. And any block
> size which doesn't match the buffer size is going to induce poor I/O.
>

This description leaves out some key info. A "FILE SYSTEM" on most modern OSes (Solaris and Linux, to name but two) uses all available free memory to cache file blocks - unless you use VERITAS or another specialist filesystem that lets you turn the caching off, or use raw devices. Performance is further enhanced by async I/O on most architectures these days.

If you're on Solaris, the excellent (free) memtool can show you exactly how much of your Oracle files is cached in the filesystem.

What you describe as a physical I/O from disk may not be one - it may be a memory access. Even when the OS does issue a physical I/O because a block isn't in the file system cache, most modern storage arrays have large front-end caches, so the read may still be satisfied from the storage cache rather than from disk.

Net result: the DBA obsession with blocksize/buffersize mismatches causing poor I/O is a myth a lot of the time, especially on the storage used by most large commercial organizations these days.

> I didn't make any of that up, and you are of course entirely free to ignore
> it.
>

Thanks for permission. I listen to all views, then check them out for myself and get some numbers - I make decisions based on "facts, not fairy-land", if you like :-)

In my experience, the view you subscribe to no longer applies to modern storage architectures.

Don't believe it? Try this test. Run a table scan on a large table (uncached in Oracle) that fits within physical memory on your machine, using a table stored on a filesystem.

  1. Measure the Oracle physical block reads and elapsed time.
  2. Run it again immediately. Oracle physical reads are the same as in 1); elapsed time is less.
  3. Wait 30 minutes (assuming your Oracle system is busy) and run it again. Oracle physical reads are the same as in 1); elapsed time is up again.

Now ask yourself:

Why is 1) slower than 2)? Why is 2) faster than 3)?
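A minimal sketch of the test in SQL*Plus, assuming a hypothetical table BIG_TABLE (v$sysstat counters are instance-wide, so run it on an otherwise quiet system):

SET TIMING ON

-- snapshot Oracle's count of physical reads before the scan
SELECT name, value FROM v$sysstat WHERE name = 'physical reads';

-- force a full scan
SELECT /*+ FULL(t) */ COUNT(*) FROM big_table t;

-- snapshot again; the delta is the scan's "physical reads"
SELECT name, value FROM v$sysstat WHERE name = 'physical reads';

The delta between the two snapshots stays constant across runs; the elapsed time tells you whether the OS really went to disk for those reads.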

> >Only you can tell whether they apply to your
> > situation.
> >
>

> That's the 'all morally equivalent' school of thought. Not in this case.
> There is a right and wrong answer.
>

It's pure science - morals don't come into it. Unfortunately, religious-type discussions in the Oracle world continue to perpetuate a lot of myths. The old raw-versus-filesystem and database-defragmentation debates come to mind.

There is a place for multiple buffer caches in some cases. If you see buffer contention, don't rule them out. Keep an open mind.
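A quick, hedged way to look for that contention before reaching for extra caches (both views exist in 9i; what counts as "high" is for you to judge on your own system):

SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'buffer busy waits';

If 'buffer busy waits' barely registers, multiple buffer caches probably aren't the fix you're looking for.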

For TPC-C - an extreme OLTP application with the goal of getting the highest tpmC number possible - multiple caches can improve throughput. All DBAs can learn something from TPC-C benchmarks. Dissing it all as marketing hype is as extreme as basing everything you do on TPC-C results. I fit in the grey area between them.

> HJR
>
>
>
> > Geoff Ingram
> >
> > http://www.dbcool.com The all-free no-timeout Oracle tuning tool
> >
> > Author "High Performance Oracle"

regards
Geoff Ingram

Received on Fri Nov 15 2002 - 03:40:33 CST
