RE: External table performance

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Sun, 14 Jan 2018 08:23:32 -0500
Message-ID: <13e101d38d3a$e1730ef0$a4592cd0$_at_rsiz.com>



Roger that. This is a problem that database file systems will eventually solve with an implicit index on the file name.

Some 1960s file systems (DTSS, SIGMA) that used an ordered-insertion name tree instead of plowing through a linked list of inodes handled this just fine, but, presuming fewer entries per directory, the UNIX guys went for the linked list, which is slightly faster on creation.
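For illustration only (a toy model, nothing like how a real file system actually lays out its inodes), here is a quick Python sketch of the difference between scanning a flat list of names and probing an ordered name index:

    import bisect
    import timeit

    # Purely illustrative: a directory as a flat list of names scanned
    # front to back (linked-list style) versus a sorted list searched
    # with binary search (an ordered name "index").
    names = [f"ext_tab_{i:08d}.dat" for i in range(2_000_000)]
    sorted_names = sorted(names)
    target = names[-1]                      # worst case for the linear scan

    def linear_lookup():
        return target in names              # O(n): walks every entry

    def indexed_lookup():
        i = bisect.bisect_left(sorted_names, target)    # O(log n)
        return i < len(sorted_names) and sorted_names[i] == target

    print("linear :", timeit.timeit(linear_lookup, number=5))
    print("indexed:", timeit.timeit(indexed_lookup, number=5))

The linear scan is trivially cheap to maintain on creation, which is the trade-off mentioned above; it only hurts once the entry count gets large.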

Tanel wrote up a classic performance track-down at Hotsos several years ago: no waits were traceable to the Oracle engine, and he had to drill all the way down to the "find the file" subroutines before the problem became apparent.

At COAT circa 1990 a very useful thing our OS admins did was monitor the number of entries in any single “folder” for analysis by the DBA and applications team.  
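That kind of check is easy to script. A minimal sketch (the threshold and path here are made up; adjust for your platform):

    import os

    THRESHOLD = 50_000          # hypothetical alert level; tune to taste

    def report_large_dirs(root):
        """Walk a tree and flag directories whose entry count exceeds THRESHOLD."""
        for dirpath, dirnames, filenames in os.walk(root):
            count = len(dirnames) + len(filenames)
            if count > THRESHOLD:
                print(f"{count:>10}  {dirpath}")

    report_large_dirs("/u01/app/oracle")    # example path only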

At the time, this led Oracle to put the various alerts, logs, and so forth into different trees once we demonstrated it was a problem. What helped convince them was that, back then, inode table expansion required a machine reboot, so things didn't just get slow; they actually blew up.

Several years ago Oracle reversed that decision for simplicity, as faster hardware, larger memory, and dynamic inode tree growth made the problem less noticeable. I groaned. "How big is too big?" is a slippery slope.

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mladen Gogala
Sent: Saturday, January 13, 2018 7:50 PM
To: oracle-l_at_freelists.org
Subject: Re: External table performance  

Hi Kellyn,

With 20+ million directory objects, problems are inevitable. File systems are not made for that kind of load. The file names in a directory are searched either sequentially or by binary search, which is fine for a few thousand directory entries that fit in memory, but will likely cause problems with 20+ million entries. File systems are simply not meant to hold 20+ million files in a single directory.
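If you do have to inspect a directory that big, at least stream the listing instead of building it in memory. A small generic Python sketch (nothing Oracle-specific; the path in the comment is just an example):

    import os

    def count_entries_streaming(path):
        """Count directory entries without building the whole listing in
        memory (os.listdir would materialise one enormous list for 20+
        million names)."""
        n = 0
        with os.scandir(path) as it:    # iterates entries lazily
            for _ in it:
                n += 1
        return n

    # Example: count_entries_streaming("/data/external_tables")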

Regards  

On 01/12/2018 06:11 PM, Kellyn Pot'Vin-Gorman wrote:

I’d only just heard of it recently, but the DBA experiencing it said it happened on any pull from an external table to do bulk loads. There are a number of bugs in Oracle Support for the search criteria: “12.1 external table read”

Bug 21553593 20+ million directory objects causing slow reads from external tables

Bug 19597583 "external table reads" during local PDB                                             

--

Mladen Gogala
Database Consultant
Tel: (347) 321-1217

--

http://www.freelists.org/webpage/oracle-l
Received on Sun Jan 14 2018 - 14:23:32 CET
