
Re: 2 GB myth

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: 29 Nov 2004 22:38:50 -0800
Message-ID: <73e20c6c.0411292238.6810e874@posting.google.com>


joel-garry_at_home.com (Joel Garry) wrote in message news:<91884734.0411291331.5eda9410_at_posting.google.com>...

>
> This is good to know, I always thought (based on older experience)
> Oracle would barf with an OS error when it tried to open one too many
> files. Would you know which platform(s) the penalty applies to?

I've seen it in Pyramid OS/X and Dynix. Dunno if it holds true for Linux, but I found a few relatively recent references to this problem for both Linux and Solaris. Basically, 32-bit Solaris is still limited to 256 open files per process; 64-bit Solaris of course has a very high limit. AFAIK AIX had a similar restriction not too long ago, though I couldn't trace the exact version. Linux 2.2 had the problem. 2.4 appears to have corrected it, although there are some reports of people hitting trouble around 2048 files (nothing to do with kernel limits, probably just a badly declared C integer somewhere in the file system code). The usual way of increasing the limit in 2.4 is

  echo <new_value> > /proc/<tag>

and that can be very dangerous if the rest of the code is broken. 2.6 appears to be free of the problem, but nobody confirmed whether that was the 32-bit or the 64-bit version.
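For what it's worth, the knobs I had in mind are the system-wide file handle counters under /proc/sys/fs (I left the exact entry unnamed above); a rough sketch for a 2.4-ish kernel, with the new value purely illustrative:

  # current handle usage: allocated / free / maximum
  cat /proc/sys/fs/file-nr
  # system-wide ceiling on open file handles
  cat /proc/sys/fs/file-max
  # raise it (as root); add fs.file-max to /etc/sysctl.conf to survive a reboot
  echo 65536 > /proc/sys/fs/file-max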

This is not just a matter of increasing ulimit, as I'm sure you know: ulimit only reports and sets per-process limits; it doesn't change anything in the kernel itself. We can usually set ulimit way beyond what a given kernel will actually support.
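To make the distinction concrete (a sketch only, assuming bash on a Linux box; the numbers you see will obviously vary):

  # per-process limits: soft and hard number of open files
  ulimit -Sn
  ulimit -Hn
  # as root you can push the per-process limit way up, e.g.
  #   ulimit -n 100000
  # ...regardless of what the kernel-wide handle table can actually deliver:
  cat /proc/sys/fs/file-max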

And an upper limit is very easy to hit, directly or indirectly: just about everything in Linux or Unix is represented as a file, including shared memory segments, network connections, real files, directories, and so on. It doesn't take much for a server process to reach a few hundred open files.
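Easy to check on your own box: pick any Oracle background or shadow process (the PID here is made up) and count what it has open:

  # one entry per open descriptor: datafiles, redo logs, sockets, pipes, ...
  ls /proc/12345/fd | wc -l
  # or with detail on what each descriptor actually is
  lsof -p 12345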

So it may indeed be a good idea to restrict the number of files (make them bigger) for very large databases. 50GB seems to be a size that most large-capacity/high-speed tape drives can cope with in a single volume, so maybe that is the next "rule of thumb" boundary? It's certainly consistent with the practice I'm seeing of people running TB-class dbs.
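Quick back-of-the-envelope (sizes picked purely for illustration): a 1TB database carved into 2GB datafiles is roughly 1024/2 = 512 files, which on its own blows past a 256-files-per-process cap; the same database in 50GB datafiles is about 20 files, leaving plenty of headroom for redo logs, control files, sockets and the rest.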

Dunno about Windoze?

Received on Tue Nov 30 2004 - 00:38:50 CST
