Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: 9.0.2 64-bit on redhat 3.0r2 AMD64 (4-way) 17G RAM

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Sat, 28 Aug 2004 20:23:59 +1000
Message-ID: <41305d5d$0$7224$afc38c87@news.optusnet.com.au>


Artis Gripemore wrote:

> "Howard J. Rogers" <hjr_at_dizwell.com> wrote in message
> news:<412fc044$0$11963$afc38c87_at_news.optusnet.com.au>...
>

>> What you've done wrong is to want to use an 8GB SGA in the first place.
>> Pace other threads in this group, it is not obvious that extra memory
>> must result in better performance. Quite the opposite.

>
> Thanks. The general advice on the 'net is to get half the RAM for the
> db, and I was trying to do this. Now I find that this wisdom is
> wrong, which is OK with me as long as I get it right.
>
>> 24 MEG? You jest, surely. The shared pool generally ought to be much
>> larger than that. Only by tuning can one say how large, though.
>> 
>> > shared_pool_reserved_size is 2M
>> 
>> Do you ever get ORA-04031 errors? I'd expect you to with this sort of
>> setting.

>
> I didn't like these settings either.
>
> When they moved to 9, 64-bit, they kept the oraSIDinit.ora from their
> 8.1 setup. The problem was that this setup did not perform well under
> 8.1, much less 9. I am trying to tune it, and all the advice I am
> getting elsewhere is "crank the buffers," "use as much RAM as you
> can," etc. So I am trying.
>
> For instance, the formula for using 50% of RAM is right off Oracle's
> site.

Well, I might as well say it (I can only be proved wrong, after all): I bet the Oracle site says no such thing. It almost certainly says that under no circumstances should the allocated memory consume more than 50% of the available physical RAM. But that is a very different proposition from saying that it ought *to be* 50% of RAM.
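To make the distinction concrete, here is a hypothetical back-of-envelope sketch (Python, not anything taken from Oracle's site): a 50% figure read as a *ceiling* merely bounds the SGA, it does not prescribe a size.

```python
def sga_ceiling_gb(ram_gb: float) -> float:
    """A ceiling, not a target: the SGA must stay under half of RAM."""
    return ram_gb * 0.5

ram_gb = 17          # the box in the subject line
ceiling = sga_ceiling_gb(ram_gb)

# The 8GB SGA in question satisfies the ceiling (8 <= 8.5), but the
# ceiling says nothing about whether 8GB is a sensible size. Only
# tuning against a real workload can answer that.
assert 8 <= ceiling
print(ceiling)  # prints 8.5
```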

> Furthermore, they say that PGA = RAM - (RAM for SGA + RAM for
> OS), right in the docs for pga_aggregate_target, which is rather
> misleading if the info here is correct.
>

>> > block size is 2048
>> 
>> Why? You're using Red Hat.

>
> True, but the db was created 2048 and there is nothing I can do about
> it. Not that I have not tried.

I don't understand how that statement squares with a statement you make below that the database is a test one. If it's test, you presumably have ample scope to change anything you like and *test* the consequences. So do so. The 2K block size will be at the root of most of your performance problems: it is waaaaay too small.

> It's a test machine--I can do what I want with it.

Apparently not... otherwise you wouldn't be willing to cripple yourself from the outside by accepting a previously ill-chosen block size.

But that aside: presumably you can simulate a reasonable test workload? If so, crack open Statspack and start tuning that way. An analysis of the SQL you are issuing and the wait events you are suffering from is the only way to reliably tune a database.

> When I get it
> right, it will go into production. OLTP. It is maybe 6G of data, and
> transaction-heavy. It was a dog in 4G of RAM.

Once again, the assumption underlying that description is that memory will solve your problem. You haven't established that, and it probably isn't true. You've an init.ora stuffed with appalling settings for your key parameters, a crappily-chosen block size hampering every bit of physical I/O you do, and probably a lot of other issues besides. That it ran like a dog almost certainly has zilch to do with 4GB or 2GB or 3.141GB of RAM.

It isn't sufficient to say, either, 'I don't want to be a DBA', if by that you mean 'I want a quick fix and please don't make me think too hard about it'. You don't have to be a DBA to want to fix it properly and long term, and that simply requires you, in the first instance, to know the tuning tools available to you.

Statspack is the main one, assuming you can simulate a proper test workload. Once you have your statspack report, feed it up to www.oraperf.com and see what it makes of it (analysis for free and in more or less two minutes). Implement the oraperf recommendations. Test again.
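As a rough sketch of that cycle, assuming Statspack is already installed (via spcreate.sql) and you are connected as the PERFSTAT user:

```sql
EXECUTE statspack.snap;        -- snapshot before the test workload
-- ... run the simulated workload ...
EXECUTE statspack.snap;        -- snapshot after it
@?/rdbms/admin/spreport        -- build the report between the two snapshots
```

The spreport script prompts for the beginning and ending snapshot IDs; the resulting text report is what you would feed to oraperf.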

You want to find out what your wait events are, and eliminate those. Only when you have tried everything Oraperf can suggest, and the waits are still there should you consider increasing your SGA size.
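For a quick look at the waits without a full report, a minimal query against the 9i dictionary (filtering the usual idle SQL*Net events) would be something like:

```sql
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT LIKE 'SQL*Net%'
 ORDER BY time_waited DESC;
```

The events at the top of that list, not the size of your SGA, are what tell you where the time is actually going.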

As a start, I would set your db_cache_size to 640M, your shared_pool_size to 640M, your pga_aggregate_target to 200M and your shared_pool_reserved_size to 64M. Those are wild stabs in the dark which, fundamentally, are meaningless. But at least they are a bit more realistic for the key parameters, and will give you a vaguely-sensible starting point from which to tune further.
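Transcribed literally into init.ora form, those starting values (wild stabs, as stated, not recommendations) would read:

```
db_cache_size              = 640M
shared_pool_size           = 640M
pga_aggregate_target       = 200M
shared_pool_reserved_size  = 64M
```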

And to re-iterate a point about which I am probably getting boring: all of your tuning effort will be hugely compromised by that stupid 2K block size. If you truly want performance, sort that out first.

> I will post the entire init.ora, if that will help. To eliminate
> weirdness, I will go back to the original one and submit it for
> critique. I can then make changes (if any) from there.
>
> Thanks,
>
> Steve

Well, you could do that. But it really won't help much. No-one can say whether 800KB or 800TB is OK for a buffer cache. "It all depends". So for any given init.ora, some people are going to say 'Oh no!' and some are just going to shrug and move on.

Statspack, and then Oraperf. Those are your best bets to move forward meaningfully.

I would also be looking for another vendor. And another employer, if I am given a test system that I can't actually test with. But that's another matter, I suppose.

Regards
HJR

Received on Sat Aug 28 2004 - 05:23:59 CDT
