Re: Global Temporary Table
You've managed to pick a fairly extreme case that demonstrates Oracle's general strategy for handling table free space - it doesn't apply to just global temporary tables.
If a single transaction deletes enough rows from a table to put the block on the transaction's free list, then that transaction can use the free space thus created to insert new rows into the table (when the segment free lists are empty). In other words, a transaction can re-use the space made available by rows it has deleted.
HOWEVER - even though the transaction can reuse the row space, it leaves a row stub (about 5 bytes, I think, but I'd have to dump a block to check) for the deleted row, which can only be cleaned out after the transaction has committed.
In your case, you
delete 20 rows
insert 20 rows
but the 20 deleted rows are now 20 stubs
delete 20 rows
insert 20 rows
which gets you to 40 stubs
delete 20 rows
insert 20 rows
now up to 60 stubs
and so on....
eventually the stubs fill the block, and the next insert runs into the next block.
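The arithmetic behind "the stubs fill the block" is easy to sketch. The figures below are assumptions, not dumped from a real block: an 8 KB block size, and the ~5-byte stub the post itself hedges on. Real blocks also lose space to the block header, ITL slots and PCTFREE, which is why the in-practice figure comes out nearer 1,500 stubs per block than the raw quotient:

```python
# Back-of-envelope stub arithmetic (assumed figures, not Oracle internals).
BLOCK_SIZE = 8192          # assumption: 8 KB block
STUB_BYTES = 5             # assumption: ~5 bytes per deleted-row stub
DELETES_PER_CYCLE = 20     # from the test case: 20 rows per cycle

# Upper bound on stubs per block, ignoring header/ITL/PCTFREE overhead.
stubs_per_block = BLOCK_SIZE // STUB_BYTES        # 1638

# Cycles of delete-20/insert-20 before the stubs alone fill a block.
cycles_to_fill = stubs_per_block // DELETES_PER_CYCLE   # 81
```

So on this rough model a block fills with stubs roughly every 80 cycles, and over 10,000 cycles the inserts creep through well over a hundred blocks that the uncommitted `delete test` must then scan.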
If you dumped a block from the table, you would probably find about 1,500 row entries per block - all of them just stubs.
If you commit each time around the cycle, the insert does a full block cleanout, and is allowed to eliminate the stubs.
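The difference the commit makes can be shown with a toy model. To be clear: `blocks_used` is a hypothetical helper for illustration only, the 8 KB block and 5-byte stub are assumptions, and all block overhead (header, ITL slots, PCTFREE) is ignored - this is the shape of the behaviour, not Oracle's internals:

```python
def blocks_used(cycles, commit_each_cycle=False, deletes_per_cycle=20,
                block_size=8192, stub_bytes=5):
    """Toy model: how many blocks the inserts spread into as stubs pile up."""
    stubs_per_block = block_size // stub_bytes   # ignores block overhead
    blocks, stubs = 1, 0
    for _ in range(cycles):
        stubs += deletes_per_cycle               # each deleted row leaves a stub
        if commit_each_cycle:
            stubs = 0                            # commit -> cleanout wipes the stubs
        elif stubs >= stubs_per_block:
            blocks += 1                          # next insert moves to the next block
            stubs = 0
    return blocks

print(blocks_used(10000))                          # no commit: ~120+ blocks
print(blocks_used(10000, commit_each_cycle=True))  # commit each cycle: 1 block
```

This matches the poster's observation that committing inside the loop makes the problem go away: once the stubs are cleaned out, the full scan done by `delete test` only ever visits a handful of blocks instead of an ever-growing tail of stub-filled ones.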
--
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/seminar.html
Optimising Oracle Seminar - schedule updated Sept 2nd

"Laly" <lalydba_at_free.fr> wrote in message news:6f8684ef.0409072331.21188c4d_at_posting.google.com...

Received on Thu Sep 09 2004 - 16:01:07 CDT
> Now we recreated our temp tablespace as a DMT with tempfiles.
>
> Here is my table :
> create global temporary table test (
> x number,
> y number,
> z char(100)
> )
> on commit preserve rows;
>
> And my procedure :
>
> create or replace procedure p_test as
> l_start number;
> l_end number;
> begin
> l_start := dbms_utility.get_time;
> for i in 1..10000
> loop
> if i mod 100 = 0 then
> l_end := dbms_utility.get_time;
> dbms_output.put_line(i || '=>' || round((l_end-l_start)/100) || ' sec');
> l_start := l_end;
> end if;
>
> delete test;
>
> for j in 1..20
> loop
> insert into test(x)
> values (j);
> end loop;
>
> update test
> set y = x,
> z = '1';
>
> end loop;
> end;
>
> The output is :
> 100=>2 sec
> ...
> 1000=>3 sec
> ...
> 1400=>4 sec
> ...
> 1800=>5 sec
> ...
> 2600=>6 sec
> ...
> 3100=>7 sec
> ...
> 3800=>8 sec
> ...
> 4000=>9 sec
> ...
> 4800=>10 sec
> ...
> 5500=>11 sec
> ...
> 9900=>17 sec
> 10000=>18 sec
>
> My thought is that if space was reused, the GTT would stay at a reasonable size
> instead of spending a lot of time doing the full table scan.
>
> I have to say that if we commit inside the loop every 500 steps, we don't
> have this performance issue.
>
> Can somebody reproduce this problem and explain what is happening here, please?
>
>
> Best regards,
>
> Laly.