
Re: Very slow load with trigger..will more RAM help?

From: Craig M. Wall <cwall_at_petersons.com>
Date: 1997/11/10
Message-ID: <6475l9$d3n@news9.noc.netcom.net>

Kelly,

Although Chrysalis is correct about the full table scans triggered for each row, I notice from another of your posts that this database is running on just one disk.
Severe I/O contention is also a major problem in your setup, and more RAM will not properly alleviate it. Even though this is a prototyping machine, running in such an improperly configured environment will not let you extrapolate the results of your coding efforts with any accuracy. Spread the database structures over at least three SCSI (not IDE) disks; even that is a compromise, but otherwise you will continue to waste days.
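
As a rough sketch only (the drive letters, tablespace names and sizes below are made up; adjust them to your own machine), separating data, indexes and temp onto their own disks looks something like this:

    -- hypothetical layout: one tablespace per physical disk
    CREATE TABLESPACE app_data
      DATAFILE 'D:\oradata\proto\app_data01.dbf' SIZE 200M;

    CREATE TABLESPACE app_index
      DATAFILE 'E:\oradata\proto\app_index01.dbf' SIZE 100M;

    CREATE TABLESPACE app_temp
      DATAFILE 'F:\oradata\proto\app_temp01.dbf' SIZE 100M;

    -- then build the segments where you want them, e.g. (index name assumed):
    -- CREATE INDEX individuals_key_ix ON individuals
    --   (book_book_number, book_pub_date, section_number, page_number, listing_number)
    --   TABLESPACE app_index;

The point is simply that the table being scanned, its indexes and the sort/rollback activity stop competing for the same spindle.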

Craig M. Wall

Chrysalis wrote in message <3464E572.7D7F_at_iol.ie>...
>kgrigg_at_acxiom.com wrote:
>>
>> Hello All!
>>
>> I am running Oracle 8 on Win NT Server 4.0 on a Pentium 200 with 64 MB
>> RAM. This is just a prototyping machine. Most everything loads in a very
>> reasonable amount of time. However, I have one table I am loading with a
>> fairly complex trigger that takes at least a DAY for each load.
>> Each load is about 4000 to 6500 records. Here is what the trigger
>> basically does.
>>
>> For each record inserted, part of its primary key is checked against a
>> view which is a UNION of 5 tables. If it is present in the view, then
>> the trigger generates a sequence number...this sequence number plus part
>> of the inserted record is inserted into another table. The sequence
>> number and the primary key information are inserted into a 3rd table.
>>
>> There is one other step that checks whether the record being inserted into
>> the 1st table is in one particular table and, if so, adds a value to the
>> field being inserted into the 3rd table. So the first table, the one with
>> this trigger, is basically a 'load' table. The values entered into it are
>> checked and split to be entered into 2 other tables.
>>
>> This process is VERY slow. Would adding more RAM help speed this up?
>> If so, what would be a good amount...$$ isn't really an object here.
>>
>> This is my first trigger, is there a way to tune it so it would work
>> faster?
>>
>> All suggestions welcomed!!
>> TIA!
>>
>> Kelly
>> kgrigg_at_acxiom.com
>>
>> Code for Trigger:
>>
>> declare
>>   cursor c_individ is
>>     select i.individual_type
>>       from individuals i
>>      where i.book_book_number = :new.book_book_number
>>        and i.book_pub_date    = :new.book_book_pub_date
>>        and i.section_number   = :new.section_number
>>        and i.page_number      = :new.page_number
>>        and i.listing_number   = :new.listing_number;
>>   cursor c_individ_count is
>>     select count(x.individual_type) ind_count
>>       from individuals x
>>      where x.book_book_number = :new.book_book_number
>>        and x.book_pub_date    = :new.book_book_pub_date
>>        and x.section_number   = :new.section_number
>>        and x.page_number      = :new.page_number
>>        and x.listing_number   = :new.listing_number;
>>   cursor c_entry_exists is
>>     select entries
>>       from nap_valid_entries
>>      where entries in
>>            (select to_char(:new.book_book_number)||to_char(:new.book_book_pub_date)||
>>                    to_char(:new.section_number)||to_char(:new.page_number)||
>>                    to_char(:new.listing_number) as entries
>>               from dual);
>>   v_entry_exists c_entry_exists%ROWTYPE;
>>   v_indiv_count  c_individ_count%ROWTYPE;
>>   v_indiv_type   individuals.individual_type%TYPE;
>> begin
>>   --Block to load Addresses and Address_Associations tables from Address_Load
>>   open c_entry_exists;
>>   fetch c_entry_exists into v_entry_exists;
>>   close c_entry_exists;
>>   if v_entry_exists.entries IS NOT NULL then
>>     insert into addresses
>>     values(seq_pre_nap_address_id.nextval, :new.zip_book, :new.room, NULL, NULL,
>>            :new.street_directional, :new.street_number, :new.street_name,
>>            :new.rural_route_address, :new.stated_community,
>>            :new.po_box_number, NULL, :new.fire_locator, NULL);
>>     open c_individ_count;
>>     --Begin search to see if individual
>>     fetch c_individ_count into v_indiv_count;
>>     --if so fill type with P or S, if not then with X
>>     close c_individ_count;
>>     if v_indiv_count.ind_count > 0 then
>>       for individ_record in c_individ loop
>>         v_indiv_type := individ_record.individual_type;
>>         insert into address_associations
>>         values(:new.book_book_number, :new.book_book_pub_date, :new.listing_number,
>>                :new.section_number, :new.page_number,
>>                seq_pre_nap_address_id.currval, v_indiv_type);
>>       end loop;
>>     else
>>       insert into address_associations
>>       values(:new.book_book_number, :new.book_book_pub_date, :new.listing_number,
>>              :new.section_number, :new.page_number,
>>              seq_pre_nap_address_id.currval, 'X');
>>     end if; --count loop
>>   end if; --exists loop
>> end;
>>
>
>OK.
>First thing is that the strategy seems to be wrong: single-row triggers are
>always going to be slow for a bulk load.
>
>I would always prefer SQL statements of the sort:
>insert into addresses (<column_list>) -- ALWAYS use a column_list
>select ... from address_load AL
>where exists
> (select null from individuals I
> where I.col1 = AL.col1
> and I.col2 = AL.col2
> ...)
>
>and
>
>insert into address_associations (<column_list>)
>select ..., 'X' from address_load AL
>where not exists
> (select null from nap_valid_entries NVE
> where ...)
>
>This should be fine for the addresses table.
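>
>To flesh out the addresses insert with the names visible in the trigger (a
>sketch only: <column_list> stays a placeholder, the select list mirrors the
>values in the trigger, and the exists test keeps the trigger's own gating
>condition against nap_valid_entries):
>
>insert into addresses (<column_list>)
>select seq_pre_nap_address_id.nextval, AL.zip_book, AL.room, NULL, NULL,
>       AL.street_directional, AL.street_number, AL.street_name,
>       AL.rural_route_address, AL.stated_community,
>       AL.po_box_number, NULL, AL.fire_locator, NULL
>  from address_load AL
> where exists
>       (select null
>          from nap_valid_entries NVE
>         where NVE.entries = to_char(AL.book_book_number)||
>                             to_char(AL.book_book_pub_date)||
>                             to_char(AL.section_number)||
>                             to_char(AL.page_number)||
>                             to_char(AL.listing_number));
>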
>However, you have two complications for the maintenance of the
>address_associations table:
>1) You need the same sequence number to be used as was used in the
>insert into addresses.
>2) You need the same individual_type (from individuals) where a row DOES
> exist in nap_valid_entries.
>
>There may be ways round this, but let's stick with your existing
>strategy:
>
>1) You don't need to select a count(*) in c_individ_count. Just fetch one
>row to see if one exists.
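>
>For example (just a sketch; v_found is an assumed BOOLEAN variable you would
>declare alongside the others):
>
>open c_individ;
>fetch c_individ into v_indiv_type;   -- one fetch is all you need
>v_found := c_individ%FOUND;
>close c_individ;
>
>if v_found then
>  -- at least one individual matches: do the same inserts as in the
>  -- ind_count > 0 branch of the trigger
>  null;
>else
>  -- no match: do the 'X' insert as before
>  null;
>end if;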
>
>2) I just bet that the entries column in the nap_valid_entries view is a
>concatenation of the fields:
>
>book_book_number||book_book_pub_date||section_number||page_number||listing_number
>
>from each of the five unioned tables.
>
>This means that no indexes can be used on ANY of the tables in the view.
>You are therefore having to do a full table scan on each of the five
>unioned tables.
>For every row inserted!
>THIS IS THE MAIN REASON FOR THE POOR PERFORMANCE.
>
>Redefine the view to make these separate view columns.
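>
>Something like this (a sketch: the underlying table and column names are
>assumed here, since they aren't shown in your post):
>
>create or replace view nap_valid_entries
>  (book_book_number, book_pub_date, section_number, page_number, listing_number)
>as
>select book_book_number, book_pub_date, section_number, page_number, listing_number
>  from valid_entries_1   -- assumed table name
>union
>select book_book_number, book_pub_date, section_number, page_number, listing_number
>  from valid_entries_2   -- assumed table name
>union
>-- ...one branch like these for each of the remaining three unioned tables...
>select book_book_number, book_pub_date, section_number, page_number, listing_number
>  from valid_entries_5;  -- assumed table name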
>
>Your c_entries_exists cursor then becomes:
>select null -- you don't actually need any values, do you?
>from nap_valid_entries
>where book_book_number = :new.book_book_number
> and book_pub_date = :new.book_book_pub_date
> and ...
>
>Assuming that at least some (preferably all) of these columns are indexed
>in ALL of the tables comprising the view, they can be used as an access
>path.
>
>3) The most elegant/efficient way to insert into address_associations
>is:
>
>open c_entries_exists;
>fetch c_entries_exists into dummy; --defined as varchar2(1);
>if c_entries_exists%found then
> ind_type := v_indiv_type;
>else
> ind_type := 'X';
>end if;
>close c_entries_exists;
>
>Then you just need one insert statement for address_associations,
>using the above value of ind_type.
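>
>For instance (taking the column order from the positional insert in the
>original trigger):
>
>insert into address_associations
>values(:new.book_book_number, :new.book_book_pub_date, :new.listing_number,
>       :new.section_number, :new.page_number,
>       seq_pre_nap_address_id.currval, ind_type);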
>
>HTH.
>
>
>--
>Chrysalis
>
>FABRICATI DIEM, PVNC
>('To Protect and to Serve')
>Terry Pratchett : "Guards Guards"
Received on Mon Nov 10 1997 - 00:00:00 CST
