The COMMIT_WRITE parameter
Can this parameter really boost performance? This simple test suggests that it can.
The default commit processing model is this: when a session issues a COMMIT, it places a commit record in the log buffer and then hangs while the log writer flushes the log buffer to disc. Only when the write is complete is the commit complete message returned to the session, which then wakes up. This is the D (durability) of ACID: no transaction is considered committed until the change vectors needed to redo it (and to undo it) are in the online redo log files. It is also the ultimate bottleneck in the Oracle server: you cannot commit transactions faster than the log writer can flush the buffer to disc. If your application is running too slowly because you are losing huge amounts of DB time on the log file sync wait event, you have hit this problem.
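The arithmetic of this bottleneck is easy to model. Here is a back-of-the-envelope sketch (pure arithmetic, no Oracle involved; the 0.5 ms flush latency is an assumed figure for illustration, not a measurement):

```python
# If a session in WAIT mode must wait for the log writer's disc flush on
# every COMMIT, its serial commit rate is capped by the flush latency,
# no matter how fast the rest of the transaction is.

FLUSH_LATENCY_S = 0.0005  # assumed 0.5 ms per log flush (hypothetical figure)

def max_commits_per_second(flush_latency_s: float) -> float:
    """Upper bound on the serial commit rate for one session in WAIT mode."""
    return 1.0 / flush_latency_s

print(max_commits_per_second(FLUSH_LATENCY_S))  # 2000.0 commits/s at best
```

Faster redo storage raises the ceiling, but the ceiling is always there: one synchronous flush per commit.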
In earlier releases, you could try to tune away log file sync waits by reducing the size of the log buffer, so that there would be less to write on each commit. Our freedom to do this has been reduced by the current structure of multiple strands of redo, and in any case, shrinking the log buffer might cure log file sync only to cause log buffer space waits instead: trading one wait event for another. So often there are only two options. First, adjust the application so that it commits less frequently; that is, adjust your business logic to fit a limitation of the database. Not a good solution. Second, move to RAC, so that you can have multiple log writers. A huge and expensive change.
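To see why committing less frequently helps, a hedged sketch (again just arithmetic, with the same assumed 0.5 ms flush latency): committing once per batch of rows incurs one log file sync wait per batch instead of one per row.

```python
def total_sync_wait(rows: int, commit_every: int, flush_latency_s: float) -> float:
    """Total time spent waiting on log file sync, assuming one synchronous
    flush per COMMIT and rows divisible by the batch size."""
    commits = rows // commit_every  # one COMMIT per batch
    return commits * flush_latency_s

# 100,000 single-row commits vs one commit per 1,000 rows, 0.5 ms per flush:
per_row = total_sync_wait(100_000, 1, 0.0005)       # 50.0 s spent waiting
batched = total_sync_wait(100_000, 1_000, 0.0005)   # 0.05 s spent waiting
```

The wait time falls linearly with the batch size, which is exactly why the usual advice is to restructure the business logic rather than the database.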
Or there is a third option: set the COMMIT_WRITE parameter. The default is IMMEDIATE,WAIT, meaning that on COMMIT the log writer writes immediately and your session waits for the write to complete. Setting it to BATCH,NOWAIT means that the log writer decides when to flush the buffer, and your session assumes the transaction has been committed and carries on working immediately. What is the performance impact? Here is a not-very-scientific simple test, using 126.96.36.199 on my Sony laptop:
orclz>
orclz> connect scott/tiger
Connected.
orclz> create table t1 as select * from dual;

Table created.

orclz> set timing on
orclz>
orclz> alter session set commit_write=batch,nowait;

Session altered.

Elapsed: 00:00:00.01
orclz> begin
  2  for i in 1..100000 loop
  3  update t1 set dummy='X';
  4  commit;
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.26
orclz>
orclz>
orclz> select TOTAL_WAITS,TIME_WAITED from v$session_event where event='log file sync'
  2  and sid=(select distinct(sid) from v$mystat);

no rows selected

Elapsed: 00:00:00.06
orclz>
orclz> alter session set commit_write=immediate,wait;

Session altered.

Elapsed: 00:00:00.01
orclz> begin
  2  for i in 1..100000 loop
  3  update t1 set dummy='X';
  4  commit;
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:25.84
orclz>
orclz> select TOTAL_WAITS,TIME_WAITED from v$session_event where event='log file sync'
  2  and sid=(select distinct(sid) from v$mystat);

TOTAL_WAITS TIME_WAITED
----------- -----------
     100000        1476

Elapsed: 00:00:00.13
orclz>
A run time of 9.26 seconds against 25.84 seconds is close to a threefold speedup: not bad. Against that, one would have to weigh the possibility of losing work. In a BATCH,NOWAIT environment it is possible for an application to COMMIT, and then for the server to crash before the commit record reaches disc. That transaction is lost: at instance recovery it will be rolled back, even though the application was told it had committed. Is the performance worth that risk? An interesting decision, and one for the business people, not the DBA.
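The figures from the test above can be cross-checked. TIME_WAITED in v$session_event is reported in centiseconds, so a quick sanity check (plain arithmetic on the numbers shown in the transcript) goes like this:

```python
# Numbers taken from the SQL*Plus transcript above.
nowait_s = 9.26        # elapsed time with commit_write=batch,nowait
wait_s = 25.84         # elapsed time with commit_write=immediate,wait
time_waited_cs = 1476  # TIME_WAITED from v$session_event, in centiseconds

speedup = wait_s / nowait_s          # ~2.79x faster with BATCH,NOWAIT
sync_wait_s = time_waited_cs / 100   # 14.76 s spent in log file sync
difference_s = wait_s - nowait_s     # 16.58 s total slowdown
```

So roughly 14.8 of the 25.84 seconds of the WAIT run were spent in log file sync: most of the difference between the two runs is accounted for by that single wait event, which is consistent with the argument that it is the commit bottleneck.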
Oracle Certified Master DBA