Re: How to : only one user in a branch can do the batch process at a time ?
Date: Sun, 24 Feb 2008 07:37:32 -0800 (PST)
On 23 Feb, 05:31, krisl..._at_gmail.com wrote:
> dear gurus,
> In ADF web application, accessed by multiple branch (20), there is a
> batch process to create invoices from orders. There are 3-5 users in a
> branch. One user can process about 30-50 orders to become invoices.
> Business rule is : In one batch process, each users must produce
> invoices with gapless invoice number. e.g : user_1 create 30 invoices
> no.001 - 030, user_2 create 40 invoices no.031-070 ... and so on...
> So we want to protect that at a time, in one branch, only one user can
> do the batch process. User_2 must wait until user_1 is done to start
> the process.
> We use the table below to maintain the latest invoice number created :
> CREATE TABLE doc_number (
>   DOC_TYPE    VARCHAR2(6 BYTE) NOT NULL,
>   BRANCH_CODE VARCHAR2(6 BYTE) NOT NULL,
>   DOC_NBR     NUMBER(12) DEFAULT 0,
>   CONSTRAINT "DOC_NUMBER_PK" PRIMARY KEY (DOC_TYPE, BRANCH_CODE)
> );
> The question is :
> I think of locking a specific row of the doc_number table (SELECT ...
> FOR UPDATE WAIT 10) at the beginning of the batch process stored
> procedure.
> But what if the session/connection is left orphaned and the row stays
> locked until the DBA kills the session?
> So is there a better approach to limit the user execution of the
> process ?
> Thank you very much for your help,
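For reference, the SELECT ... FOR UPDATE WAIT approach you describe would look roughly like this (just a sketch; the 'INV' doc type and the :branch_code bind are my assumptions, not from your post):

```sql
-- Sketch of the row lock described above. WAIT 10 raises ORA-30006
-- ("resource busy") if another session already holds the row lock.
DECLARE
  v_nbr doc_number.doc_nbr%TYPE;
BEGIN
  SELECT doc_nbr
    INTO v_nbr
    FROM doc_number
   WHERE doc_type    = 'INV'          -- assumed doc type
     AND branch_code = :branch_code   -- assumed bind variable
     FOR UPDATE WAIT 10;

  -- ... create the invoices, then UPDATE doc_number SET doc_nbr = ...

  COMMIT;  -- COMMIT or ROLLBACK releases the lock
END;
/
```

Note that if the client process dies outright, PMON will eventually roll back the dead session and release the lock; the problematic case is a session that stays connected but idle.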
If I understand correctly, the doc_number table contains information duplicated from other tables (the latest invoice number can be found elsewhere) and its only purpose is the locking mechanism. Under ordinary conditions I would not be happy with this data duplication.
One solution I would try is to create global temporary tables for all tables involved in the batch process and populate them as the user provides data. At the end of the process, dump the contents of these temporary tables into the actual ones with the proper seq.nextval (it should take very little time for your 40 invoices). Combined with a lock on the underlying tables to guarantee that no gaps can appear in the generated numbers should the bulk insert run concurrently with another one, this should do the trick.
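A sketch of what I mean (all table, column, and sequence names here are made up for illustration):

```sql
-- Staging table: rows are private to each session and emptied at COMMIT.
CREATE GLOBAL TEMPORARY TABLE invoice_stage (
  order_id    NUMBER      NOT NULL,
  branch_code VARCHAR2(6) NOT NULL,
  amount      NUMBER
) ON COMMIT DELETE ROWS;

-- During the batch: INSERT INTO invoice_stage ... as the user works.

-- At the end of the batch: serialize writers, then number and copy the
-- rows in a single statement.
LOCK TABLE invoice IN EXCLUSIVE MODE;

INSERT INTO invoice (invoice_nbr, order_id, branch_code, amount)
SELECT invoice_seq.NEXTVAL, order_id, branch_code, amount
  FROM invoice_stage;

COMMIT;  -- releases the table lock and clears the staging rows
```

One caveat: an Oracle sequence by itself is not gapless (cached values can be lost on rollback or restart), so for strictly gapless numbering you may still prefer to draw the next number from a locked counter row rather than a sequence.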