
Re: 3GB RAM usage by Oracle

From: Richard Foote <richard.foote_at_bigpond.nospam.com>
Date: Mon, 30 Aug 2004 12:31:59 GMT
Message-ID: <37FYc.13420$D7.8054@news-server.bigpond.net.au>


Hi Don,

Comments embedded. However, something I meant to say earlier is that it's nice to have a discussion with you on technical matters. Although I don't agree with you on many things, I do appreciate the fact you're willing to discuss it.

"Don Burleson" <don_at_burleson.cc> wrote in message news:998d28f7.0408290740.6828c89a_at_posting.google.com...
> > - Do you really think Don can upgrade CPUs in 15 mins. 15 mins to
> > order/purchase, have delivered, installed, tested, rebooted ... Somehow I
> > think not.
>
> Yes, I'm not allowed to touch hardware after I spilled coffee in the
> server! But I know how to call the vendor. But you're right, it was
> actually more like 10 minutes downtime.

Oh, so you're talking "downtime". Does that mean to fix it the "proper" way it'll require 6 weeks downtime as well ? If you're going to compare times, especially if you're going to document timings to your clients, it might be an idea to identify that one set of timings is for "downtime" and the other is "elapsed". Otherwise, you're not comparing apples with similar fruit. So how much downtime does it take to fix the SQL ? And by that I mean fix the SQL that is *actually* impacting the performance of the (important) applications, not just the SQL that *could* be bad.

>
> > - Do you really think Don diagnosed and identified *2000* "REALLY bad SQL"
> > (Don's quote) statements ? Somehow I think not ...
>
> Easy really, just join v$sql_plan into v$sql and toss-in a WHERE
> clause.
>
> Mike Ault sells automated script for this, and it's really easy.
>
> Check it out:
>
> http://www.rampant-books.com/download_adv_mon_tuning.htm

Umm, it's a book. And it's one of Mike's books to boot so I won't comment on the contents. However, note that any script that "generalises" and makes assumptions about bad vs good SQL is likely to be erroneous. What you're suggesting is that you *think* there are 2000 bad SQL statements, but you really don't know, and you really don't know which "bad" ones are actually relevant and which ones aren't.

Which SQL statements were actually *impacting response times*, the response times of transactions important to the business ? It's these (potentially few) SQL statements that need to be the focus, not some database-wide generalisation.
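For what it's worth, the kind of join being described is easy enough to write yourself, along these lines (a rough sketch only, and note that the "full scan = bad" assumption and the disk_reads cut-off are exactly the sort of arbitrary generalisation I'm talking about):

SELECT s.sql_text, s.executions, s.disk_reads, s.buffer_gets
  FROM v$sql s
 WHERE EXISTS (SELECT NULL
                 FROM v$sql_plan p
                WHERE p.address      = s.address
                  AND p.hash_value   = s.hash_value
                  AND p.child_number = s.child_number
                  AND p.operation    = 'TABLE ACCESS'
                  AND p.options      = 'FULL')
   AND s.disk_reads > 100000   -- arbitrary cut-off
 ORDER BY s.disk_reads DESC;

It will quite happily list 2000 "bad" statements without telling you whether a single one of them matters to anybody's response time.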

Also, you mentioned earlier in the thread that "Believe it or not, most USA shops already have tuned their SQL and their high physical I/O is due to not enough data cache". So what's the use of Mike's wonder scripts in the USA where untuned SQL is not the problem ?
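(And if high physical I/O from an undersized cache really were the problem, that at least is something you can measure rather than assert. For example, assuming the buffer cache advisory is enabled, something like

SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192        -- adjust for your block size
 ORDER BY size_for_estimate;

shows roughly what a bigger or smaller DEFAULT cache would do to physical reads on *your* workload, rather than on a benchmark.)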

If you want to read a book that *really* discusses how to find problems in a database and how to *really* solve them, then I suggest you read Cary Millsap's book. Several times. That, Don, *is* a good book.

>
> > - Do you really think that a CPU upgrade can magically convert a system with
> > dire and urgent performance problems due to *2000* "REALLY bad SQL"
> > statements into a lean, mean fighting machine ? Somehow I think not ...
>
> Sadly, it happens every day, crappy code executing 100x faster.

So you're saying that a 100s multi-FTS, poorly joined query now runs in 1s after your CPU upgrade ? Have you any evidence or statistics or measurements or an example to explain how you achieved this ?

>
>
> > - Do you really think a *single* "REALLY bad SQL" statement that for example
> > performs an unnecessary FTS and reads say 100,000 blocks unnecessarily
> > (rather than the 4 blocks required) will magically run efficiently and
> > effectively with a faster CPU ? Then multiply by *2000*. Somehow I think not
>
> I didn't say "efficiently" or "effectively", I said FASTER!
>
> WAY FASTER!

Like 100 times faster ? Consistently ? So what were the before and after CPU configurations on what platform ? Honestly ?
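Just to illustrate why I'm sceptical (purely made-up figures, not measurements): if that 100s query spends, say, 95s waiting on physical I/O for its full table scans and 5s on the CPU, then

   elapsed now                  = 95s + 5s    = 100s
   with a CPU twice as fast     = 95s + 2.5s  = 97.5s
   with an infinitely fast CPU  = 95s + 0s    = 95s

To get from 100s down to 1s, virtually the whole 100s has to be CPU time in the first place, which is not what a "multi-FTS, poorly joined" query normally looks like.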

>
> > BTW, still searching, searching in my quest to find all these "not uncommon"
> > 30G buffer cache sites ...
>
> Just read the Oracle10g TPCC benchmarks, Richard. They all use over
> 30 gig. There are some good examples on Oracle-L too!

TPCC benchmarks I know about. It's the *real* world that interests me and believe me, there is a big difference.

What examples on Oracle-L ? There have been a few posts and I've received a number of emails privately but 10G has been the largest anyone has admitted to. The next largest is 3G ? Where are all these 30G sites you've mentioned that are not so uncommon ?

>
> Mike Ault just finished a book with a complete TPCH benchmark and I
> have reproducible timings showing measurable benefits to really big
> cache sizes on 10g:
>
> http://www.rampant-books.com/book_2005_1_ssd.htm

Again, it's one of Mike's books and I don't buy Mike's books. And again, it's artificial benchmarks and test data and not real world examples. I just don't see much evidence of actual sites with 30G buffer caches running around all over the place.

>
> If I see any other specific example, I'll let you know. . . .

I thought you had many examples, so if you can provide any real world ones, I would be interested.

However, I again go back to my initial point.

At the start of this thread, you recommended setting the buffer cache to as large a size as possible so as not to "waste RAM". You have 2 choices, you said: 4GT or AWE. Go for it. Even though the OP clearly said he had already tuned his problems away.

I say such advice is *wrong* and *dangerous*. More memory is not necessarily good. Using AWE has severe overheads and restrictions. Adding more memory may not necessarily make things run faster or better. In fact under many scenarios, adding more memory, *especially using AWE*, can *hurt* performance. I say tune and allocate an *appropriate* amount of memory depending on your environment and applications, and "appropriate" does not necessarily mean all of it. "Wasting" RAM, especially in 32-bit Windows, is not necessarily a bad thing.
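Just to spell out what "go for it" actually involves on 32-bit Windows (this is from memory, so check the platform documentation; the figure below is purely illustrative): 4GT is the /3GB switch in boot.ini, which simply gives oracle.exe roughly 3GB of address space instead of 2GB. AWE needs the /PAE switch plus an init.ora along the lines of

   use_indirect_data_buffers = true   # required to place the buffer cache in AWE memory
   db_block_buffers = 1048576         # sized in blocks (here ~8GB of 8K blocks); db_cache_size can't be used

and the AWE window that maps those buffers is then carved out of the ordinary (and now smaller) process address space, which is where a good part of those overheads and restrictions come from.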

You gave no such warnings or suggestions in your advice. None.

I also say that adding hardware to address a problem is in most cases the entirely wrong thing to do. I also say that adding hardware in most cases may not even achieve the goal of having acceptable performance. I say in most cases, it's the uneconomic way, especially in the longer term (and heck in the shorter term too).

And that, dear Don, is why I fundamentally disagree with your "Method D".

Cheers

Richard
