RE: Low FD limit a performance issue?

From: Herring Dave - dherri <Dave.Herring_at_acxiom.com>
Date: Thu, 3 Nov 2011 17:48:27 +0000
Message-ID: <BD475CE0B3EE894DA0CAB36CE2F7DEB4454A9CD4_at_LITIGMBCRP02.Corp.Acxiom.net>



Currently the base is 1024 and we're working with ASM, so in the end we'd only have to worry if somehow a database process needed to open more than 1024 non-ASM database files, which I can't imagine ever being the case in our environment. Maybe this warning only applies to non-ASM databases.

DAVID HERRING
DBA
Acxiom Corporation
EML dave.herring_at_acxiom.com
TEL 630.944.4762
MBL 630.430.5988
1501 Opus Pl, Downers Grove, IL 60515, USA WWW.ACXIOM.COM<http://www.acxiom.com/>

The information contained in this communication is confidential, is intended only for the use of the recipient named above, and may be legally privileged. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please resend this communication to the sender and delete the original message or any copy of it from your computer system. Thank you.

From: Kurt [mailto:kurtengelo_at_gmail.com]
Sent: Wednesday, November 02, 2011 6:32 PM
To: Herring Dave - dherri
Cc: oracle-l List
Subject: Re: Low FD limit a performance issue?

Hi Dave,

I didn't have time to actually test this - so take it for what it's worth.

I believe that when the FD limit is insufficient and the Oracle database hits the maximum number of open files, it closes a currently open file and then opens the file it wanted to read.

In the past, that was a great enough performance hit that someone decided to put that warning in the alert log. It would be a mildly interesting exercise to see whether it still causes 'severe performance degradation'. A second interesting test would be setting the FD limit to a wildly low value - such as 4.

I hope this helps.
Kurt

On Wed, Nov 2, 2011 at 6:49 AM, Herring Dave - dherri <Dave.Herring_at_acxiom.com<mailto:Dave.Herring_at_acxiom.com>> wrote:

I did a crazy thing the other day - I reviewed the alert log from a system I inherited a while ago. I found that after a recent auto-restart this message appeared: "WARNING:Oracle instance running on a system with low open file descriptor limit. Tune your system to increase this limit to avoid severe performance degradation." Sure enough, on this system /etc/init.d/init.crsd contains the line "ulimit -n unlimited", which on RHEL 4 generates an error, so the FD limit defaults to 1024 (bug 5862719).

The problem is easy to resolve, but my question is about Oracle's warning. How could a low open file descriptor limit be a potential source of "severe performance degradation"? Isn't it a black-or-white issue: either the limit is high enough, or if not, the database won't open / you can't add more datafiles?
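For reference, a process's effective limit is easy to verify. A quick diagnostic sketch using Python's resource module, which reports the same soft/hard values as "ulimit -n" (the 1024 threshold below is taken from the bug described above):

```python
import resource

# Soft and hard limits on open file descriptors for this process;
# the soft limit is what the kernel actually enforces.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft} hard={hard}")

# On the affected RHEL 4 systems, "ulimit -n unlimited" in init.crsd
# errors out and processes fall back to the 1024 default.
if soft <= 1024:
    print("low FD limit -- expect the alert-log warning on startup")
```

On Linux the same numbers for an already-running process (e.g. a database background process) can be read from /proc/<pid>/limits.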

--

http://www.freelists.org/webpage/oracle-l

--

The opinions expressed in this email are my own, and not my company's.

Received on Thu Nov 03 2011 - 12:48:27 CDT
