OCI: Using OCIStmtExecute with multiple rows at once

From: Rolf Unger <rolf.unger_at_ctilabs.de>
Date: 4 Aug 2004 04:34:02 -0700
Message-ID: <32fe19ad.0408040334.321a7bb8_at_posting.google.com>



Hi,

I have a multithreaded OCI application on an Oracle 8.0.5 client connecting to Oracle 8 and Oracle 9i databases.

During the initialisation phase I read some smaller tables directly into a structured memory block of the application.

The steps that I take are (statement allocation is left out):

 OCIStmtPrepare for "select count(*) from table_a"
 OCIDefineByPos    host variable 'nSize' for position 1

 OCIStmtExecute( svc_h, stmt_h, err_h, 1, 0, 0, 0, OCI_DEFAULT)

directly read the number of lines from 'nSize'

Allocate a memory block: "number of lines" * sizeof structure

 OCIStmtPrepare for "select col_a, col_b from table_a"

 OCIDefineByPos    host variable 'p_data[0].a' for position 1
 OCIDefineByPos    host variable 'p_data[0].b' for position 2

 OCIDefineArrayOfStruct for the 1st OCIDefine pointer
 OCIDefineArrayOfStruct for the 2nd OCIDefine pointer

 OCIStmtExecute( svc_h, stmt_h, err_h, nSize, 0, 0, 0, OCI_DEFAULT)

That's all; no OCIStmtFetch is needed here.
All rows from the table appear as array elements in the memory block.

That has been running like a charm for several months (although the server runs through the table twice: once for the "select count(*)" and once to fetch the data).
Table sizes were somewhere between 10 and 300 rows.

Now I have tables with 30 000 rows. The above method still works. Of course, transferring 30 000 rows over the network takes some time, but no more than 3 or 4 seconds.

The basic question is:
  IS THERE SOME LIMIT FOR THE FOURTH PARAMETER OF OCIStmtExecute?

Maybe with table sizes like that it is better to use e.g. 3000 as the iters parameter of OCIStmtExecute, and then do several fetches after the execute?
Will a fetch return multiple rows after I have set up an array define, or will it still return row by row?

Any reports on experiences with this sort of thing are welcome. Advice that it is simply stupid to keep a table with more than 30 000 rows in the client's memory cache will be ignored if it does not contain more than this statement ;-)

Thanks, Rolf.

Received on Wed Aug 04 2004 - 13:34:02 CEST