Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Memory Limit Imposed on Oracle by Windows?

From: joel garry <joel-garry_at_home.com>
Date: 22 Apr 2007 18:38:47 -0700
Message-ID: <1177292327.703563.218160@y5g2000hsa.googlegroups.com>


On Apr 20, 2:47 pm, dbaplusp..._at_hotmail.com wrote:
> On Apr 20, 3:32 pm, sybra..._at_hccnet.nl wrote:
>
> > On 19 Apr 2007 08:14:40 -0700, dbaplusp..._at_hotmail.com wrote:
>
> > >On Apr 19, 5:22 am, "Dereck L. Dietz" <diet..._at_ameritech.net> wrote:
> > >> > That's my take. The limitations to 4gb are all based on the 32-bit
> > >> > architecture. 2 to the 32nd power = 4 billion = 4GB. That's the math
> > >> > side of it; the direct memory addressing limitation of a 64-bit
> > >> > architecture is... well, a lot more than 4GB.
>
> > >> > Definitely do your own digging into the specs of 32-bit versus 64-bit
> > >> > Windows, *and* 32-bit versus 64-bit Oracle. But I think you'll find
> > >> > that the resourcing under 64-bit is a lot more sizeable, and all the
> > >> > hokey config jazz you have to do under 32-bit Windows won't be needed.
>
> > >> Thanks.
>
> > >In 32 bit, the limit is 2GB and not 4GB
>
> > Incorrect. When booted with the correct switches the limit is 3 GB,
> > not 2 GB. And adding just another gig usually won't help you a damn,
> > unless you subscribe to the silver bullet religion promoted by DKB.
>
> > --
> > Sybrand Bakker
> > Senior Oracle DBA
>
> I was referring to plain 32 bit Oracle without doing anything special
> in the OS. On UNIX systems, Oracle always refers
> to a limit of 2GB. I was surprised that some people are saying the limit
> is 4GB.
>
> I do not subscribe to any silver bullet, yet am open to taking advantage
> of 64 bit Oracle and setting a large db_buffer cache.
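Before getting into the reply, the figures being thrown around in the thread are all simple powers of two, and it helps to see them side by side. A quick sketch (the 2 GiB / 3 GiB split reflects 32-bit Windows defaults and the /3GB boot switch mentioned above):

```python
# Back-of-the-envelope arithmetic for the limits discussed in the thread.

def gib(n_bytes):
    """Convert a byte count to GiB."""
    return n_bytes / 2**30

# 32-bit pointers can address 2**32 bytes total -- the "4GB" figure.
address_space_32bit = 2**32
print(gib(address_space_32bit))          # 4.0

# Default 32-bit Windows split: 2 GiB user / 2 GiB kernel -- the "2GB" figure.
default_user_space = 2**31
print(gib(default_user_space))           # 2.0

# With the /3GB boot switch the user portion grows -- the "3GB" figure.
user_space_with_3gb_switch = 3 * 2**30
print(gib(user_space_with_3gb_switch))   # 3.0
```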

The 2GB limit referred to on unix is file size, not memory size. It dates from older unix, where that was the maximum file size, and some older Oracle utilities had trouble dealing with larger files as the world moved past the limitation.
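That file-size ceiling falls out of the same arithmetic: on those older systems the file offset was a signed 32-bit value, so a file could grow to at most 2^31 - 1 bytes, just under 2 GiB. A minimal illustration:

```python
# The old unix "2GB" is a file-offset limit: a signed 32-bit off_t
# tops out at 2**31 - 1 bytes.
max_offset_32bit = 2**31 - 1
print(max_offset_32bit)             # 2147483647
print(max_offset_32bit / 2**30)     # just under 2.0 GiB
```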

Setting an overly large cache isn't quite the big deal it was in earlier Oracle versions, but you still need at least a latch per buffer, and managing all those latches and buffers can eat excess CPU. You really should get into the habit of figuring out what a proper buffer size is. Tools exist (OEM, for example) to draw pretty pictures and make recommendations, and it is worthwhile to learn how to work it out from the command line (with v$bh and the advisor views, for instance) so you understand what is going on.

Also, if you make things too large, you risk running into really strange effects or bugs that misdirect you when you go looking for what is wrong. This is especially confusing when two different bottlenecks sit near each other. For example, if your CPU is saturated and your network is not quite saturated, and you fix whatever is saturating the CPU, total response time might get worse because so many more processes can now run and push their results onto the network.
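To put a number on the "latch per buffer" overhead: the buffer count is just cache size divided by block size, and it grows fast. A sketch with hypothetical but typical figures (8 KiB blocks, a 4 GiB cache; actual per-buffer cost varies by version and platform):

```python
# Why an oversized cache costs CPU: every buffer is one more thing
# the cache management code has to track. Hypothetical sizes below.
db_block_size = 8 * 2**10      # 8 KiB blocks, a common default
db_cache_size = 4 * 2**30      # a 4 GiB buffer cache

n_buffers = db_cache_size // db_block_size
print(n_buffers)               # 524288 buffers to latch and manage
```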

Why take away memory from user processes unnecessarily?

jg

--
@home.com is bogus.
"It wasn't until later that we realized that we were being exploited,
and then you think and you look back and say, 'Boy, I was dumb for
letting that happen to me.' " - Luis Verduzco, legal immigrant not
paid overtime by Lakewood Building Systems of Canada
The carpenters committee used a commercial database used by the
federal government to discover whether workers were using legitimate
Social Security numbers. It found that out of 189 workers, 112 had
false numbers.
Received on Sun Apr 22 2007 - 20:38:47 CDT
