Low FD limit a performance issue?

From: Herring Dave - dherri <Dave.Herring_at_acxiom.com>
Date: Wed, 2 Nov 2011 13:49:45 +0000
Message-ID: <BD475CE0B3EE894DA0CAB36CE2F7DEB4454A87B5_at_LITIGMBCRP02.Corp.Acxiom.net>

I did a crazy thing the other day: I reviewed an alert log from a system I inherited a while ago. I found that after a recent auto-restart this message was logged: "WARNING:Oracle instance running on a system with low open file descriptor limit. Tune your system to increase this limit to avoid severe performance degradation.". Sure enough, on this system /etc/init.d/init.crsd contains the line "ulimit -n unlimited", which on RHEL 4 generates an error, so the FD limit falls back to the default of 1024 (bug 5862719).
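For anyone hitting the same thing, a quick way to confirm what limit actually took effect is below. This is a sketch assuming a Linux /proc filesystem; the 65536 value is just an illustrative replacement for "unlimited", not anything Oracle-specific:

```shell
# Soft and hard limits on open file descriptors for the current shell
ulimit -n       # soft limit (1024 here if the "ulimit -n unlimited" line failed)
ulimit -Hn      # hard limit

# What a running process actually got (init scripts may have failed silently):
# /proc/<pid>/limits shows the values in effect. Using the shell's own PID here.
grep "Max open files" /proc/self/limits

# The fix in init.crsd is to use a numeric value instead of "unlimited",
# e.g. (65536 is a placeholder; size it to your datafile/socket count):
#   ulimit -n 65536
```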

The problem is easy to resolve, but my question is about Oracle's warning. How could a low open file descriptor limit be a source of "severe performance degradation"? Isn't it a black-or-white issue: either the limit is high enough, or the db won't open / you can't add more datafiles?

Acxiom Corporation

EML   dave.herring_at_acxiom.com
TEL    630.944.4762
MBL   630.430.5988 

1501 Opus Pl, Downers Grove, IL 60515, USA WWW.ACXIOM.COM
Received on Wed Nov 02 2011 - 08:49:45 CDT