tuning insert [message #128065] Thu, 14 July 2005 17:46
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
One of my statements has the following tkprof:
The table is as you can see from the insert - most columns are numbers. I just ran statistics for the whole table.
There are 3 indexes - 1 on the PK (CONFLICT_ID, CONFLICT_RULE_ID), 1 on CONF_ENTITY1 and 1 on CONF_ENTITY2.
INSERT INTO SEP_CONFLICT(CONFLICT_ID, SEP_TEST_ID,INBOUND_ENTITY_ID, CONF_ENTITY1, CONF_ENTITY2, DEGREE_OF_SEP, CONFLICT_STAT,WEAK_LIKE_SCORE, WEAK_REL_SCORE, CONFLICT_DT)VALUES (:1, :2,:3,:4,:5,:6,:7,'P',:8,:9,:10)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   3420      2.77      40.52       6881        129      35533        3420
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3421      2.77      40.52       6881        129      35533        3420

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 67

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      6881        1.23         38.13
  SQL*Net message to client                    3420        0.00          0.00
  SQL*Net message from client                  3420        0.00          2.06
********************************************************************************
If I need to troubleshoot this, what should I look at first?
Why are the db file sequential read waits so high for an insert?

Thanks a lot, mj
Re: tuning insert [message #128068 is a reply to message #128065] Thu, 14 July 2005 17:57
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
I have a similar insert that is worse than the previous one:
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net break/reset to client                 806        0.01          0.12
  SQL*Net message to client                     477        0.00          0.00
  SQL*Net message from client                   477        0.00          0.39
********************************************************************************

What do the SQL*Net break/reset to client waits depend on?

Thanks a lot, mj
Re: tuning insert [message #128266 is a reply to message #128065] Sat, 16 July 2005 11:29
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
It looks like in your first case you ran this insert statement 3420 times, which took a total of about 41 seconds. That is only about 0.012 seconds per insert, which seems fairly fast.

What is your query, what is your code?

Why are you executing this insert statement, which handles only one row at a time, over and over and over? Try to rewrite it so that you are using set/bulk operations instead of row-at-a-time processing - something along the lines of the sketch below.
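Purely as an illustration, here is a minimal PL/SQL sketch of what a bulk insert with FORALL could look like - the collections and their names are hypothetical, and I have trimmed the column list to a few of the columns from your statement:

DECLARE
  -- Hypothetical collections holding the values the app currently
  -- binds one row at a time.
  TYPE t_num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  l_conflict_id  t_num_tab;
  l_sep_test_id  t_num_tab;
  l_entity1      t_num_tab;
  l_entity2      t_num_tab;
BEGIN
  -- ... populate the collections from the external source here ...

  -- One round trip and one execution inserts the whole batch.
  FORALL i IN 1 .. l_conflict_id.COUNT
    INSERT INTO sep_conflict
      (conflict_id, sep_test_id, conf_entity1, conf_entity2, conflict_dt)
    VALUES
      (l_conflict_id(i), l_sep_test_id(i), l_entity1(i), l_entity2(i), SYSDATE);

  COMMIT;
END;
/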
Re: tuning insert [message #128449 is a reply to message #128266] Mon, 18 July 2005 11:45
ziggy
Messages: 27
Registered: July 2005
Location: Moscow, Russia
Junior Member

Or try parallel inserts in different sessions.

If your CPU has enough reserve (utilization less than 80%),
then the next thing you need to tune is redo log performance.

Good luck
Re: tuning insert [message #128450 is a reply to message #128449] Mon, 18 July 2005 11:50
ziggy
Messages: 27
Registered: July 2005
Location: Moscow, Russia
Junior Member

In addition, the SQL*Net message waits indicate that the bottleneck is on your application side. Try Smartin's advice with bulk or batch inserts.

I hope network performance is not a problem in your case.
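If you want to see how much client chatter there really is, one rough check using only the standard dictionary views is to look at the round-trip statistics per session:

SELECT s.sid,
       n.name,
       st.value
  FROM v$sesstat  st,
       v$statname n,
       v$session  s
 WHERE n.statistic# = st.statistic#
   AND s.sid = st.sid
   AND n.name IN ('SQL*Net roundtrips to/from client',
                  'user calls',
                  'execute count')
 ORDER BY s.sid, n.name;

A high ratio of round trips to rows inserted is another sign that batching on the client side would pay off.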
Re: tuning insert [message #128472 is a reply to message #128065] Mon, 18 July 2005 16:48
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
Thanks a lot, guys.
First - I cannot touch the application. Maybe there will be changes, but not earlier than 5-6 months, and in the meantime I have to make it work. There's no option for batching. The app gets one record from an external source and inserts different parts of it into different tables. Then it compares this product to all other products already in the tables and populates the report tables. The transactions per second are very high - around 3,500/sec. The inserts start at 18,000/min and gradually drop as the data is inserted, but I still have to keep the speed as high as possible.
I know that the application we are running has logic issues - inserting into and selecting from the same tables - but there's nothing I can do... except try to tune the Oracle side as much as I can.
For this reason, parallel inserts cannot be specified in the code. Can I specify them outside the query - at the table level, say (something like the sketch below)? I already set the parallel servers to 3, although this is OLTP, not DW, and MAXTRANS in the table storage is 60... How else could I control the parallel processing? Is parallel going to be beneficial to me, given this is a high-volume OLTP system?
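Just to show what I mean by table level - something like this, if it is even applicable to single-row OLTP inserts:

-- Table-level default degree of parallelism; as far as I know this only
-- influences the optimizer for full scans and direct-path operations,
-- it does not parallelize the application's single-row inserts.
ALTER TABLE sep_conflict PARALLEL 3;

-- And to switch it back off:
ALTER TABLE sep_conflict NOPARALLEL;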
My redo logs are 8 groups of 2 GB each. They switch every 30 minutes.
What else can I do to tune them?
The log file sync event is even in my top 5 in Statspack - any ideas for that?
My log buffer is 10 MB, as Oracle advises for large applications.
Any other ideas?
Thanks in advance,
mj


Re: tuning insert [message #128515 is a reply to message #128472] Tue, 19 July 2005 02:11
ziggy
Messages: 27
Registered: July 2005
Location: Moscow, Russia
Junior Member

If you have already evaluated the wait events for the redo logs, then think about whether their weight is significant. What improvement would you get if you increased redo log performance by 15 or 30 percent? Would you be happier?
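A rough way to check that weight is to compare the redo-related waits with everything else, for example with a query like this (instance-wide totals since startup; V$SYSTEM_EVENT reports time in centiseconds):

SELECT event,
       total_waits,
       time_waited / 100 AS seconds_waited
  FROM v$system_event
 WHERE event IN ('log file sync',
                 'log file parallel write',
                 'log buffer space',
                 'db file sequential read')
 ORDER BY time_waited DESC;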

If your logs switch every 30 minutes, that's OK. Are your logs located on a separate disk, or do you have a RAID or SAN? Please give us your hardware and software configuration: CPU, storage, file locations and so on.
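You can also see the switch rate over time from V$LOG_HISTORY, as long as the history has not aged out of the controlfile:

SELECT TRUNC(first_time, 'HH24') AS switch_hour,
       COUNT(*)                  AS log_switches
  FROM v$log_history
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY switch_hour;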

It seems to me you have good performance. :)
Re: tuning insert [message #128564 is a reply to message #128065] Tue, 19 July 2005 07:35
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
Yes, please provide more information as Ziggy has mentioned: hardware, memory, disk system, and major memory parameters.

Also, tell us more about your application and its code/logic. Perhaps there is something you can do with the physical storage and structure of your tables (like an IOT or partitioning), but we don't know enough to say - see the sketch below for the kind of thing I mean.
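Purely as an illustration of the partitioning idea (the key, the partition count, and whether it helps at all depend on details we do not have):

-- Hypothetical hash-partitioned version of the table; hash partitioning
-- spreads concurrent inserts across several segments, which can reduce
-- hot-block contention. Column list trimmed for the example.
CREATE TABLE sep_conflict_hashed (
  conflict_id       NUMBER,
  conflict_rule_id  NUMBER,
  conf_entity1      NUMBER,
  conf_entity2      NUMBER
  -- ... remaining columns as in the existing SEP_CONFLICT table ...
)
PARTITION BY HASH (conflict_id)
PARTITIONS 8;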

Also, I don't see how parallel will help, as that is useful for a long-running operation, but in your case you have many short-running operations.

Binding would help (I don't know whether the application binds or not), but if you can't change the code, maybe you can investigate the CURSOR_SHARING parameter - something like the sketch below. Test any changes first, though.
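For reference, the parameter change would be along these lines - it can change execution plans, so try it at session level first and test before touching the instance:

-- Session-level trial:
ALTER SESSION SET cursor_sharing = FORCE;

-- Instance-wide, only if testing shows it helps:
ALTER SYSTEM SET cursor_sharing = FORCE;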

And are you sure you literally can't change anything, not even create some stored procedures or something for the existing app to call or interface with? How does it work now - do you just install some pre-created scripts, users, and stored procedures?