c.d.o.misc -> Re: How to make data transformation faster?
grrey_at_mail.ru wrote:
> Hi all.
> There is a table with about 3,000,000 records per month:
>
> create table Log (
>   User_id   number(12),
>   Date_time date,
>   Kind      number(1)  -- 1 = start_date, 2 = finish_date,
>                        -- 3 = start_date cancel, 4 = finish_date cancel
> );
Hopefully these generic column names are just for your example.
>
> I need to get this table:
>
> create table stat(
>   User_id     number(12),
>   Start_Date  date,
>   Finish_Date date);
>
> Right now I'm doing it this way:
>
> create index stat_indx on stat(User_id,finish_date);
>
> begin
>   for rec in (select * from log) loop
>
>     if rec.kind = 1 then
>       insert into stat (user_id, start_date)
>       values (rec.user_id, rec.date_time);
>
>     elsif rec.kind = 2 then
>       update stat
>          set finish_date = rec.date_time
>        where user_id = rec.user_id
>          and finish_date is null;
>
>     elsif rec.kind = 3 then
>       delete from stat
>        where user_id = rec.user_id
>          and finish_date is null;
>
>     elsif rec.kind = 4 then
>       update stat
>          set finish_date = null
>        where user_id = rec.user_id
>          and finish_date = (select max(finish_date)
>                               from stat
>                              where user_id = rec.user_id);
>     end if;
>
>   end loop;
>
> end;
>
> but it's too slow. How could it be done faster? I've tried TABLE OF (a
> PL/SQL collection), but it takes too much memory.
Row-by-row PL/SQL is often much slower than a single set-based SQL statement.
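For instance, IF your data allows it, the whole loop can sometimes collapse into one statement. This is only an untested sketch, and it leans on a strong assumption (a user's events never interleave, so each kind=1 row is immediately followed, per user in date_time order, by its own kind=2 or kind=3 row); kind=4 is left out since its intended meaning isn't clear from your post:

```sql
-- Sketch only, under the pairing assumption above.
insert into stat (user_id, start_date, finish_date)
select user_id, date_time, next_time
from (
  select user_id, date_time, kind,
         lead(kind)      over (partition by user_id order by date_time) as next_kind,
         lead(date_time) over (partition by user_id order by date_time) as next_time
  from log
)
where kind = 1                               -- every start...
  and (next_kind is null or next_kind <> 3); -- ...that was not cancelled
```

A start with no following row yet gets a null finish_date, which matches what your loop produces. If the assumption doesn't hold for your data, don't use this as-is.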
>
> PS. Sorry if this has already been posted; I just don't know how to
> search for it. If anyone knows how, please tell me.
Google Groups is a great way to search newsgroups for previous posts.
Note I'm making this critical assumption:
there is never a repeated (ID, kind) row. IOW, this never happens:

ID  date_time  kind
 1  xxxxxxx     1
 1  yyyyyyy     1
Note that your cursor does a FULL TABLE SCAN of log every time the block
runs. Consider four separate cursors, one per kind (and honestly, I don't
understand what is supposed to happen when kind=4).
I'll leave the other loops as an exercise for the reader. The key is
recognizing that the log table is really four tables masquerading as one;
treat it that way. (Better yet, change your data model to avoid this whole
mess.)
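To make the "four cursors" idea concrete, here's a rough, untested sketch of the kind=1 case: one set-based statement instead of millions of single-row inserts. The hint and the commit are only suggestions, not requirements:

```sql
-- Sketch: the kind=1 "cursor" becomes a single direct-path insert.
insert /*+ append */ into stat (user_id, start_date)
select user_id, date_time
from   log
where  kind = 1;

commit;  -- a direct-path insert must be committed before the table is read again
```

The kind=2/3/4 steps would follow the same pattern, but note they are order-sensitive (your loop applies events in cursor order), so a naive per-kind rewrite may not give identical results without more care.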
HTH,
Ed
Received on Wed Feb 01 2006 - 15:56:35 CST