Re: Why do I get out of memory errors when 10GB memory is free?
Date: Sat, 18 Jul 2009 01:41:14 +0000 (UTC)
On Thu, 16 Jul 2009 20:32:27 +0200, Matthias Hoys wrote:
> Is this still the case?
Apparently, it is. I haven't heard of any recent changes in the way SVR4.2 deals with virtual memory, and why would that change? In a virtual memory system, one has to write pages out when necessary. The backing store for freshly allocated anonymous pages, such as those obtained with malloc() or calloc(), is swap space. What leads you to the conclusion that this has changed?
>I thought that for larger amounts of memory it's
> no longer needed to have the same amount of swap as the physical memory.
There are pages that can be excluded from virtual memory handling. On Linux, we're talking about hugepages; Solaris has something similar. Other than that, if pages are part of the virtual memory system, they had better be covered by swap space.
> I don't know for HP-UX, but we are running Oracle without problems on
> RHEL with 4GB of RAM and 2GB of swap space (if you consider Linux to be
> some kind of UNIX ;-)).
It's a stretch. RHEL has a debilitated version of Unix memory management.
It's for desktop users (wink, wink). Desktop users are generally
considered to be blithering idiots by the OS designers, so they made
changes to the proven algorithm which adjust memory management so
that it can be used by any idiot. It's designed with the following goals:
- It mustn't be easy to understand and flexible; it is "intelligent" instead.
- It takes control away from the system administrator and gives it to the "artificial intelligence". With an Intel inside, a computer is allegedly more intelligent than the SA.
- It must allow a person who has never read a page of a manual to use the system. It has to make the knowledgeable users pay for their arrogance.
The Linux VM system achieves all of these goals very well.