
Re: IOT, memory and transaction time

From: <deja_at_2bytes.co.uk>
Date: 17 Apr 2007 06:56:41 -0700
Message-ID: <1176818201.833425.80190@n59g2000hsh.googlegroups.com>


On Apr 17, 11:34 am, d..._at_2bytes.co.uk wrote:
> On Apr 17, 10:39 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
>
> > On 17.04.2007 11:26, d..._at_2bytes.co.uk wrote:
>
> > > We have a table with 4 columns - col1 int, col2 int, col3 char(32),
> > > col4 char(32).
>
> > > Previously there were 2 indexes
> > > unique index1 on col3, col1
> > > index2 on col2, col1
>
> > > Reads always read all columns.
>
> > > We have recently changed the way this table works. The columns are all
> > > the same but because of the way we put data in we have changed the
> > > indexes.
>
> > Please provide complete DDL for old and new table layout. You can
> > easily generate it with DBMS_METADATA.GET_DDL if your version of Oracle
> > includes it.
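>
> > For example, in SQL*Plus (with your table and index names substituted,
> > upper case, and the owner passed as a third argument if the object is
> > in another schema):
>
> > SET LONG 100000
> > SELECT DBMS_METADATA.GET_DDL('TABLE', 'YOUR_TABLE') FROM DUAL;
> > SELECT DBMS_METADATA.GET_DDL('INDEX', 'YOUR_INDEX') FROM DUAL;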
>
> > > Now the table is an Index Ordered table with col2,col3 as PRIMARY KEY.
>
> > IOT = Index Organized Table.
>
> > > There is an additional index on col3,col2.
>
> > Why do you put another index on the same set of columns?
>
> > > Reads have improved quite a lot but there is a problem with the
> > > writes. Previously, when adding 40,000 records at a time, write times
> > > were pretty consistent no matter how big the table (this transaction
> > > used to actually involve 200 updates of the same table followed by
> > > 40,000 inserts). With the new indexes, they get progressively worse
> > > with every write, plateauing at around 10 minutes! (this new
> > > transaction involves updating a dummy table with one record and one
> > > column 200 times (using a where clause that references the other
> > > table), followed by 40,000 inserts). The update part of the
> > > transaction basically checks that no-one else has updated since you
> > > read.
>
> > > Why have write times changed so dramatically? Why do they get
> > > progressively worse? If it was the update statement that references
> > > the other table with a where clause, then why have Read times improved
> > > (using pretty much the same clause)? So I figured it probably wasn't
> > > the Update statement. So, looking at the Insert, why has this become
> > > so different? The columns are the same; it is just that one is index
> > > organized and one isn't. The 40,000 inserts will all have a col2 value
> > > that is the same but is greater than any previous write of col2. The
> > > col3 values may not be in order when written back - so is it just a
> > > memory issue? If we allocate more memory to the Oracle instance will
> > > that solve it? Does the memory have to be allocated to a specific area
> > > or is the SGA sufficient?
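>
> > > (For reference, this is how I have been checking the instance's
> > > current memory allocation; I am assuming v$sga and v$sgastat are the
> > > right places to look:
>
> > > SELECT * FROM v$sga;
> > > SELECT pool, name, bytes FROM v$sgastat WHERE name = 'free memory';)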
>
> > My first guess would be that because of the changed physical structure
> > inserts are generally more complex and generate more IO.
>
> > Kind regards
>
> > robert
>
> Thanks Robert,
>
> Here is the DDL...
>
> New DDL:
> CREATE TABLE R_TABLE1 (
> deleted INTEGER NOT NULL,
> versionNo INTEGER NOT NULL,
> Id char(32) NOT NULL,
> Rel char(32) NOT NULL,
> CONSTRAINT R_TABLE1_1 PRIMARY KEY (versionNo, Id, Rel),
> FOREIGN KEY (deleted) REFERENCES T_TRANSACTIONS (TransNum),
> FOREIGN KEY (versionNo) REFERENCES T_TRANSACTIONS (TransNum),
> CHECK (versionNo >= deleted))
> ORGANIZATION INDEX;
>
> CREATE INDEX R_TABLE1_2 ON R_TABLE1 (Id, versionNo);
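>
> One thing I have read about secondary indexes on IOTs: they store
> logical rowids containing a "guess" of the leaf block the row was in
> when the index was built, and the guesses go stale as the IOT splits.
> If that is right, this should show how stale they are and refresh them
> (my understanding of the syntax, from the docs):
>
> SELECT index_name, pct_direct_access
> FROM user_indexes
> WHERE table_name = 'R_TABLE1';
>
> ALTER INDEX R_TABLE1_2 UPDATE BLOCK REFERENCES;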
>
> Old DDL:
> CREATE TABLE R_TABLE1(
> deleted INTEGER NOT NULL,
> versionNo INTEGER NOT NULL,
> Id char(32) NOT NULL,
> Rel char(32) NOT NULL,
> FOREIGN KEY (deleted) REFERENCES T_TRANSACTIONS (TransNum),
> FOREIGN KEY (versionNo) REFERENCES T_TRANSACTIONS (TransNum),
> CHECK (versionNo > deleted));
>
> CREATE UNIQUE INDEX R_TABLE1_1 ON R_TABLE1 (Id, deleted, Rel);
> CREATE INDEX R_TABLE1_2 ON R_TABLE1 (versionNo, deleted);
>
> There are 2 basic reads on this table.
> 1) the latest version of a record
> 2) all the changes since the last read version
>
> In the old system this meant
> 1) select * from r_table1 where deleted = -1 (because every time a
> change was made, it would set the deleted column of the existing
> version to the current transNum and then insert a new version with
> deleted = -1)
> AND
> select * from r_table1 where deleted between 0 and @currentTransNum
> (just in case someone made changes between you starting your read and
> ending it - consistent read of numerous tables)
>
> 2) select * from r_table1 where versionNo > @lastRead and versionNo <=
> @currentTransNum
> AND
> select * from r_table1 where versionNo > @currentTransNum and deleted >
> @lastRead and deleted <= @currentTransNum (again for consistent read)
>
> In the new system it just involves one read for each because no update
> takes place, a new version just supersedes the old version by virtue
> of a bigger versionNo:
> 1) select * from r_table1 t1 where versionNo = (select max(versionNo)
> from r_table1 t2 where t1.id = t2.id and t2.versionNo <=
> @currentTransNum) and deleted = -1
> 2) select * from r_table1 t1 where versionNo = (select max(versionNo)
> from r_table1 t2 where t1.id = t2.id and t2.versionNo > @lastRead and
> t2.versionNo <= @currentTransNum)
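>
> To check whether the optimizer actually drives these through the
> primary key, I have been looking at EXPLAIN PLAN output; a sketch,
> with bind-style placeholders standing in for the transaction numbers:
>
> EXPLAIN PLAN FOR
> SELECT * FROM r_table1 t1
> WHERE versionNo = (SELECT MAX(versionNo) FROM r_table1 t2
> WHERE t2.id = t1.id AND t2.versionNo <= :currentTransNum)
> AND deleted = -1;
>
> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);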
>
> In the old system the write transaction used to do:
> 1) Update R_TABLE1 Set deleted = versionNo, versionNo =
> @currentTransNum where id = @id and deleted = -1 and versionNo =
> @versionNo
> ONCE FOR EACH value of Id
> (If 1 record updated then no-one has updated since our read)
> 2) Insert into R_TABLE1(deleted, versionNo, Id, Rel) Values (-1,
> @currentTransNum, @id, @rel)
> for as many relationship tuples as required.
>
> In the new system the write transaction does this:
> 1) Update DUMMY Set Col1 = 1 WHERE @versionNo = (SELECT
> MAX(versionNo) from R_TABLE1 where id = @id and versionNo <
> @currentTransNum)
> (If 1 record updated then no-one has updated since our read)
> 2) Same insert as old one above.
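>
> If it would help, I can trace one of the slow insert batches; my
> understanding is that a level-8 10046 trace includes the wait events:
>
> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
> -- run the 200 updates and 40,000 inserts here
> ALTER SESSION SET EVENTS '10046 trace name context off';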
>
> As far as the secondary index is concerned, the application ships in
> both an Oracle and a SQL Server flavour. In SQL Server this additional
> index seemed to improve performance, being used in some of the
> subselects.
>
> The problem is that the new version of our schema only degrades like
> this under Oracle, not under SQL Server. Obviously in SQL Server we
> use a CLUSTERED INDEX, but as far as I am aware that is essentially
> the same as an IOT.
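>
> One experiment I could try: if the slowdown comes from the IOT's
> B-tree splitting as the out-of-order Id values arrive, rebuilding it
> between batches should reset the write times (MOVE ONLINE is, as far
> as I know, supported for IOTs):
>
> ALTER TABLE R_TABLE1 MOVE ONLINE;
> ALTER INDEX R_TABLE1_2 REBUILD;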
>
> Thanks for any advice
> Phil

Can anyone help with this?

The Oracle results are not consistent with what we are seeing on SQL Server.

Basically the results are like this:

Old SQL Server vs new SQL Server: the new schema is better on reads and
better on writes, and the statistics are consistent throughout.

Old Oracle vs new Oracle: the new schema's reads are better, but its
writes start off better and quickly deteriorate to very poor results.

However, both new Oracle and new SQL Server have had their schemas
changed in exactly the same way. And in the tests, old Oracle and new
Oracle ran on exactly the same Oracle configuration, while old SQL
Server and new SQL Server ran on exactly the same SQL Server
configuration.

The evidence leads me to think that the new schema requires more
memory/space, and that the SQL Server configuration had enough in both
cases, whereas the Oracle configuration had enough for the old version
but not for the new.
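
One way I can think of to test the memory theory before resizing
anything: snapshot the instance statistics between batches. If it were
purely a buffer cache problem I would expect physical reads to climb;
if it is the IOT structure itself, leaf node splits should climb
instead (statistic names as I believe they appear in v$sysstat):

SELECT name, value
FROM v$sysstat
WHERE name IN ('leaf node splits', 'leaf node 90-10 splits',
               'physical reads', 'physical writes');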

Is there another conclusion?

Thanks for any help.
