RE: External table performance
Date: Sun, 14 Jan 2018 08:23:32 -0500
Message-ID: <13e101d38d3a$e1730ef0$a4592cd0$_at_rsiz.com>
At the time, this led Oracle to split the various alerts, logs, and so forth into different directory trees once we demonstrated it was a problem. What helped convince them was that, back then, expanding the inode table required a machine reboot, so things weren't just slow; they actually blew up.
Several years ago Oracle reversed that decision for simplicity, as faster hardware, larger memory, and dynamic inode table growth made the problem less noticeable. I groaned. "How big is too big" is a slippery slope.
mwf
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mladen Gogala
Sent: Saturday, January 13, 2018 7:50 PM
To: oracle-l_at_freelists.org
Subject: Re: External table performance
Hi Kellyn,
With 20+ million directory entries, problems are inevitable. File systems are not made for that kind of load. File names within a directory are searched either sequentially or by binary search, which is fine for a few thousand entries that fit in memory, but will likely cause problems with 20+ million entries. File systems are simply not meant to hold 20+ million files in a single directory.
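The usual workaround for this is the same one the "different trees" approach above used: fan the files out across a hierarchy of subdirectories so that no single directory grows into the millions. A minimal sketch in Python, with hypothetical path names and an assumed two-level, 256-wide fan-out (not anything Oracle itself prescribes):

```python
import hashlib
from pathlib import Path

def fanout_path(base: Path, filename: str, levels: int = 2) -> Path:
    """Map a filename to a nested subdirectory chosen by hashing the name.

    Two levels of 256 buckets each (one hex byte per level) give
    256 * 256 = 65,536 leaf directories, so 20 million files average
    only about 305 entries per directory instead of one huge one.
    """
    digest = hashlib.sha256(filename.encode()).hexdigest()
    # Take one hex byte ('a3', '7f', ...) per level of nesting.
    parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
    return base.joinpath(*parts, filename)

# Hypothetical example: place an external-table data file in the tree.
p = fanout_path(Path("/data/ext_tables"), "load_0001.dat")
```

The hash keeps the distribution uniform regardless of how the file names are generated, which a prefix-based split (say, the first two characters of the name) does not guarantee.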
Regards
On 01/12/2018 06:11 PM, Kellyn Pot'Vin-Gorman wrote:
I’d only just heard of it recently, but the DBA experiencing it said it occurred on any pull from an external table to do bulk loads. There are a number of bugs in Oracle Support under the search criteria: “12.1 external table read”
Bug 21553593 20+ million directory objects causing slow reads from external tables
Bug 19597583 "external table reads" during local PDB
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
--
http://www.freelists.org/webpage/oracle-l
Received on Sun Jan 14 2018 - 14:23:32 CET