Real-time concurrency problems (Oracle 10.2, Solaris) [message #348269]
Tue, 16 September 2008 04:47
theASPguy (Junior Member) | Messages: 2 | Registered: June 2008 | Location: Switzerland
Hi all,
I'm having concurrency issues with Oracle while trying to insert real-time stock price data. I have a Windows Service (C# .NET 2) that receives a feed from Bloomberg.
Individual ticks can arrive very rapidly, averaging around 50 updates per second. For background, the application is multi-threaded, with update requests sitting in a pool queue. Worker threads read from the queue as connections become available; there are around 25 connections in the pool.
My problem is that, using the procedure below, I can end up with two inserts for the same ID if two transactions on the same instrument ID run concurrently. Since I cannot read dirty data (READ UNCOMMITTED) as I can in MS SQL Server, I am unable to see a concurrent insert until that transaction has been committed. This then breaks views joining to the table.
Please advise on how I can resolve this issue, or confirm that the code below is a good method for dealing with the problem at hand.
Thanks in advance,
David Evans
CREATE OR REPLACE PROCEDURE P_PRICE_ENGINE_UPDATE_DETAILS
(
    IN_INSTR_ID NUMBER,
    IN_PRICE    NUMBER,
    IN_UPDATED  DATE,
    IN_DELTA    NUMBER,
    IN_GAMMA    NUMBER,
    IN_VEGA     NUMBER
)
IS
    -- Local variables
    TEST_ID NUMBER;
BEGIN
    -- Raises NO_DATA_FOUND if no row exists for this instrument,
    -- or TOO_MANY_ROWS if concurrent transactions have created duplicates.
    SELECT ID_INSTRUMENT
      INTO TEST_ID
      FROM INSTRUMENT_REALTIME_DATA
     WHERE ID_INSTRUMENT = IN_INSTR_ID;

    -- The row exists, so update it.
    dbms_output.put_line('UPDATING');
    UPDATE INSTRUMENT_REALTIME_DATA
       SET LAST_PRICE         = NVL(IN_PRICE, LAST_PRICE),
           LAST_PRICE_UPDATED = NVL(IN_UPDATED, SYSDATE),
           DELTA              = NVL(IN_DELTA, DELTA),
           GAMMA              = NVL(IN_GAMMA, GAMMA),
           VEGA               = NVL(IN_VEGA, VEGA)
     WHERE ID_INSTRUMENT = IN_INSTR_ID;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        -- No row exists yet for this instrument, so insert one.
        dbms_output.put_line('INSERTING');
        INSERT INTO INSTRUMENT_REALTIME_DATA (
            ID_INSTRUMENT,
            LAST_PRICE,
            LAST_PRICE_UPDATED,
            DELTA,
            GAMMA,
            VEGA)
        VALUES (
            IN_INSTR_ID,
            IN_PRICE,
            NVL(IN_UPDATED, SYSDATE),
            IN_DELTA,
            IN_GAMMA,
            IN_VEGA);
    WHEN TOO_MANY_ROWS THEN
        -- Concurrent transactions have produced two rows for the same ID:
        -- delete them and insert the record again. (A GOTO into the
        -- NO_DATA_FOUND handler is not legal PL/SQL, so the insert is repeated.)
        DELETE FROM INSTRUMENT_REALTIME_DATA WHERE ID_INSTRUMENT = IN_INSTR_ID;
        INSERT INTO INSTRUMENT_REALTIME_DATA (
            ID_INSTRUMENT,
            LAST_PRICE,
            LAST_PRICE_UPDATED,
            DELTA,
            GAMMA,
            VEGA)
        VALUES (
            IN_INSTR_ID,
            IN_PRICE,
            NVL(IN_UPDATED, SYSDATE),
            IN_DELTA,
            IN_GAMMA,
            IN_VEGA);
    WHEN OTHERS THEN
        -- Consider logging the error here, then re-raise.
        RAISE;
END;
Re: Real-time concurrency problems [message #348346 is a reply to message #348271]
Tue, 16 September 2008 08:23
JRowbottom (Senior Member) | Messages: 5933 | Registered: June 2006 | Location: Sunny North Yorkshire, ho...
I just ran a timing script, and MERGE comes in consistently at twice the speed of the approach the OP is using. If performance is part of the problem, then that's got to help.
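For reference, a MERGE-based version of the OP's procedure might look something like the sketch below. This assumes the table and parameter names from the original post and is not necessarily the exact statement that was timed:

CREATE OR REPLACE PROCEDURE P_PRICE_ENGINE_UPDATE_DETAILS
(
    IN_INSTR_ID NUMBER,
    IN_PRICE    NUMBER,
    IN_UPDATED  DATE,
    IN_DELTA    NUMBER,
    IN_GAMMA    NUMBER,
    IN_VEGA     NUMBER
)
IS
BEGIN
    -- Single statement: update the row if it exists, insert it otherwise.
    MERGE INTO INSTRUMENT_REALTIME_DATA D
    USING (SELECT IN_INSTR_ID AS ID_INSTRUMENT FROM DUAL) S
       ON (D.ID_INSTRUMENT = S.ID_INSTRUMENT)
    WHEN MATCHED THEN UPDATE SET
         D.LAST_PRICE         = NVL(IN_PRICE, D.LAST_PRICE),
         D.LAST_PRICE_UPDATED = NVL(IN_UPDATED, SYSDATE),
         D.DELTA              = NVL(IN_DELTA, D.DELTA),
         D.GAMMA              = NVL(IN_GAMMA, D.GAMMA),
         D.VEGA               = NVL(IN_VEGA, D.VEGA)
    WHEN NOT MATCHED THEN INSERT
         (ID_INSTRUMENT, LAST_PRICE, LAST_PRICE_UPDATED, DELTA, GAMMA, VEGA)
    VALUES
         (IN_INSTR_ID, IN_PRICE, NVL(IN_UPDATED, SYSDATE), IN_DELTA, IN_GAMMA, IN_VEGA);
END;

Note that without a unique constraint on ID_INSTRUMENT, two sessions can in principle still both take the NOT MATCHED branch, so the index suggestion later in the thread is still worth considering.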
Re: Real-time concurrency problems [message #348368 is a reply to message #348363]
Tue, 16 September 2008 09:39
ThomasG (Senior Member) | Messages: 3212 | Registered: April 2005 | Location: Heilbronn, Germany
So if there is more to the procedure that would prevent a single MERGE from being used instead of:
* select
* update
* exception NO_DATA_FOUND -> insert
* exception TOO_MANY_ROWS -> delete, goto insert
then you should consider creating a unique index / primary key on ID_INSTRUMENT and then just doing (see the sketch below):
* insert
* exception DUP_VAL_ON_INDEX -> update
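A minimal sketch of that insert-first pattern, assuming a unique index or primary key exists on ID_INSTRUMENT and reusing the table and parameter names from the original procedure:

CREATE OR REPLACE PROCEDURE P_PRICE_ENGINE_UPDATE_DETAILS
(
    IN_INSTR_ID NUMBER,
    IN_PRICE    NUMBER,
    IN_UPDATED  DATE,
    IN_DELTA    NUMBER,
    IN_GAMMA    NUMBER,
    IN_VEGA     NUMBER
)
IS
BEGIN
    -- Optimistically insert; a duplicate key means the row already exists.
    INSERT INTO INSTRUMENT_REALTIME_DATA
        (ID_INSTRUMENT, LAST_PRICE, LAST_PRICE_UPDATED, DELTA, GAMMA, VEGA)
    VALUES
        (IN_INSTR_ID, IN_PRICE, NVL(IN_UPDATED, SYSDATE), IN_DELTA, IN_GAMMA, IN_VEGA);
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        -- The row already exists, so update it instead.
        UPDATE INSTRUMENT_REALTIME_DATA
           SET LAST_PRICE         = NVL(IN_PRICE, LAST_PRICE),
               LAST_PRICE_UPDATED = NVL(IN_UPDATED, SYSDATE),
               DELTA              = NVL(IN_DELTA, DELTA),
               GAMMA              = NVL(IN_GAMMA, GAMMA),
               VEGA               = NVL(IN_VEGA, VEGA)
         WHERE ID_INSTRUMENT = IN_INSTR_ID;
END;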
Re: Real-time concurrency problems [message #348401 is a reply to message #348269]
Tue, 16 September 2008 11:23
theASPguy (Junior Member) | Messages: 2 | Registered: June 2008 | Location: Switzerland
Hi all,
Thanks to everyone who has posted a reply; it's much appreciated. I've posted this reply so everyone can see what my experience has been.
So, I tried the MERGE statement. Thanks to Michael for that, and to confirm JRowbottom's timing, I'm now seeing update speeds of around 750 records per second. That is a massive improvement in throughput from the service/database, as the threads become available much sooner. It has also stress-tested and proven my database connection pooling code :)
It also seems to have resolved the concurrency issue: the system has been running for over 4 hours without an incident, whereas the original proc was guaranteed to raise an exception within the first 5 minutes.
Also, from a coding point of view, I no longer have to use exceptions as a means of flow control, which is a much better approach. To be honest, I hated the original proc, but it was the only way I could figure out how to code it in Oracle.
Finally, thanks to ThomasG for the note on the index/PK. It was something I considered when designing the table/proc. I know it's something I should do, but since this is a table with a massive update rate, I didn't want the overhead of an index or unique constraint adding to the processing time for each insert, so I left it off. I have not added an index/PK, and just by using the MERGE I have no more issues.
Thanks to all for your suggestions.
Regards,
David Evans