Re: INSERT statement hangs due to library cache lock

From: Vadim Keylis <vkeylis2009_at_gmail.com>
Date: Mon, 3 Oct 2016 22:48:03 -0700
Message-ID: <CAHL4c1Nss4BNfOECFm9tce4LJ+r616OU0c-HEg5nu89g19e2kA_at_mail.gmail.com>



Mark,
Thanks so much for your suggestion. I will explore the possibility of changing the insert statement to insert directly into the partition.
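
For reference, a minimal sketch of the partition-extended insert syntax (the table, partition, and column names below are hypothetical, only to show the shape of the statement):

    -- hypothetical names; insert directly into a named daily partition
    INSERT INTO sales PARTITION (p20161003) (sale_id, sale_date, amount)
    VALUES (:sale_id, :sale_date, :amount);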

   I believe the root cause of the issue was the large number of daily partitions being dropped. As I understand it, any time a partition is dropped the data dictionary is changed, which invalidates the SQL cached in the library cache. To invalidate the SQL, the dropping process needs to acquire a lock on the library cache. This table is heavy on conventional inserts from the application, and those sessions also need a library cache lock to get at the SQL. Since the previously cached insert SQL had been invalidated, it had to be hard parsed again. While it was being parsed it held the library cache lock, and every other process had to wait until the lock became available, including the process that was dropping partitions. I was dropping two years' worth of daily partitions on a table heavy with inserts. Unfortunately we have to use conventional inserts.
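
As a side note, the waiting sessions can be seen with a query along these lines against v$session (a sketch, not necessarily what we ran; column availability may vary by Oracle version):

    -- sessions currently waiting on the library cache lock
    SELECT sid, event, sql_id, blocking_session, seconds_in_wait
      FROM v$session
     WHERE event = 'library cache lock';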

      My game plan is to throttle partition drops on this table to minimize how often the cached insert SQL is invalidated; scheduling the drops daily, rather than in large batches, should also minimize the impact.
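
For reference, a rough sketch of what a daily drop, combined with Mark's exchange-first suggestion (option 2), might look like, using hypothetical object names (sales, and a pre-created empty table sales_empty with matching structure and local index definitions):

    -- swap an empty segment in, then drop the now-empty partition
    ALTER TABLE sales EXCHANGE PARTITION p20140101 WITH TABLE sales_empty
      INCLUDING INDEXES WITHOUT VALIDATION;
    ALTER TABLE sales DROP PARTITION p20140101;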

Thanks so much again, everyone, for the amazing help in resolving the issue. Vadim

On Sat, Oct 1, 2016 at 5:37 AM, Mark W. Farnham <mwf_at_rsiz.com> wrote:

> Mark’s note is the odds-on favorite to explain your lock.
>
>
>
> Now, what can you do about it?
>
>
>
> A few things come to mind (which I think I’ve tested in the past, but
> which you should re-test in your environment which is likely many versions
> since my tests.)
>
>
>
> 1) Change the way you insert:
>
> a. Insert directly into the current partition or subpartition. (If
> this would be the creation of an interval partition, you can do the first
> approximate blockfull via the table name so it still gets automagically
> created.) If memory serves this places the lock on the partition, and since
> the drops are on different partitions you shouldn’t have a problem.
>
> b. Insert into a matching template table and use partition exchange.
> (Since you apparently only have local indexes, this is probably a very good
> option for you).
>
> c. Unless your inserts are very small, you probably do **NOT** want
> to switch to conventional inserts. (If you are already conventional and
> you’re getting the long duration exclusive lock, please let us know.)
>
> 2) Change the way you drop: Since you apparently only have local
> indexes, exchanging an empty partition in for the old one and then dropping
> the partition should decrease the duration of any locks to a couple of
> dictionary operations. Oh, and then you can relocate and compress the old
> partition to create keep forever images, back up, and/or destroy to your
> heart’s content without affecting locks on the production table. (And only
> affecting much at all in the case of the read load on the existing
> tablespaces’ file(s) and some cpu during the relocation and/or compress.)
>
>
>
> Please let us know how that pans out for you, or if you discover your
> difficulty is something else.
>
>
>
> Thanks,
>
>
>
> mwf
>
>
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
> *On Behalf Of *Mark J. Bobak
> *Sent:* Wednesday, September 28, 2016 4:41 PM
> *To:* Vadim Keylis
> *Cc:* Oracle-l
> *Subject:* Re: INSERT statement hangs due to library cache lock
>
>
>
> Hi Vadim,
>
>
>
> Are your inserts doing 'INSERT /*+ APPEND */'?
>
>
>
> Append implies an exclusive, rather than shared lock.
>
>
>
> -Mark
>
>
>
> On Wed, Sep 28, 2016 at 4:36 PM, Vadim Keylis <vkeylis2009_at_gmail.com>
> wrote:
>
> Good afternoon. We have a table that is partitioned by day with two
> local indexes. One index is non-unique, the other is unique, but both
> indexes are local to the partitions. We had never dropped partitions
> since the table was created in 2012, and we were dropping all partitions
> through 9/24. Looking at Enterprise Manager Cloud Control we observed a
> spike in insert statements into that table. The spike was caused by a
> library cache lock. None of the insert statements would go into the
> partitions we were dropping. We would appreciate advice on why dropping
> partitions would cause a library cache lock on the table.
>
>
>
> Thanks so much in advance,
>
> Vadim
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Oct 04 2016 - 07:48:03 CEST
