
Re: performance blob

From: Alberto Dell'Era <alberto.dellera_at_gmail.com>
Date: 7 Nov 2004 02:35:35 -0800
Message-ID: <4ef2fbf5.0411070235.30c1fedc@posting.google.com>


Well, first of all, storing .dat files in an Oracle database has the definite advantage of data protection - if the db is professionally managed (backups, mirrored online redo logs, archived logs, etc.) you can recover from any hw or sw failure, or better, you can get as sophisticated as you like in order to lower the probability of data loss. Oracle is excellent at doing that, and heart signals are definitely a type of data that I would protect - imagine phoning a patient: "ahem, Mr Smith, we lost your records ... we must repeat your session on the exerciser ...".

As far as performance is concerned, you may also try SQL*Loader, which is very likely faster than custom Java code for loading BLOBs (and it can load over the network). Or, if you can copy the .dat files onto the server, you can easily load each one as a blob using dbms_lob.loadfromfile().
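As a minimal sketch of the dbms_lob.loadfromfile() route - assuming a directory object DAT_DIR pointing at the server-side folder with the files, and a hypothetical table heart_signals(id number, data blob) - something along these lines should do:

  create or replace directory dat_dir as '/u01/app/data';

  declare
    l_bfile bfile := bfilename('DAT_DIR', 'patient_001.dat');
    l_blob  blob;
  begin
    -- create the row with an empty blob and get a locator back
    insert into heart_signals (id, data)
      values (1, empty_blob())
      returning data into l_blob;

    -- copy the whole OS file into the blob
    dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
    dbms_lob.loadfromfile(l_blob, l_bfile, dbms_lob.getlength(l_bfile));
    dbms_lob.fileclose(l_bfile);

    commit;
  end;
  /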

In Java, consider writing in exact multiples of the chunk size of the blob; you can control the "writing buffer" size using the lob Java APIs. This can have a dramatic impact on performance.
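Just as a sketch, you can read the chunk size server-side with dbms_lob.getchunksize() and then make your Java write buffer an exact multiple of it (table/column names below are hypothetical):

  select dbms_lob.getchunksize(data) as chunk_size
    from heart_signals
   where id = 1;

  -- e.g. if chunk_size comes back as 8132, write from Java in buffers of
  -- n * 8132 bytes (the Oracle JDBC lob classes also expose getChunkSize()).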

Consider also playing with the CACHE attribute of the blob. With CACHE you are writing into the buffer cache (memory), so your perceived speed is that of memory (DBWR will asynchronously write to disk for you, perhaps later, and Oracle guarantees that your committed data will eventually get written to disk even if the machine aborts, which is not guaranteed by every OS filesystem). Clearly, if you are loading massively, so that DBWR can't keep up with your speed, you will just end up waiting for DBWR to clean a full cache - in that case go for a NOCACHE blob, so that you write directly to disk, bypassing the buffer cache; that is more efficient (uses fewer resources) even if your perceived speed is that of disk, not memory. Try both and see.
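The attribute can be switched after table creation, so trying both is cheap - a sketch, again with a hypothetical table name:

  -- writes go through the buffer cache (perceived speed of memory):
  alter table heart_signals modify lob (data) (cache);

  -- massive load where DBWR can't keep up: write straight to disk instead
  alter table heart_signals modify lob (data) (nocache);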

If you're going to process the data, you may also want to consider loading the .dat file as a table (e.g. one row for every sample). If that's the case, consider using 10g, which includes support for fast IEEE 754 floating point operations (the binary_float and binary_double datatypes), and analyze your data using pure sql (very very fast) or perhaps a bit of pl/sql (which has been improved as well in 10g). Take a quick look here for an example of math operations:

http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:26019773543004#26024461840033
(read the next followup too for binary_double)
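Just to sketch the "one row per sample" idea with the 10g datatypes (all names here are made up):

  create table heart_samples (
    session_id number,
    sample_no  number,
    amplitude  binary_double
  );

  -- basic statistics computed in pure sql:
  select session_id,
         count(*)          n_samples,
         avg(amplitude)    mean_ampl,
         stddev(amplitude) stddev_ampl,
         max(amplitude)    peak_ampl
    from heart_samples
   group by session_id;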

For a thesis I would consider 10g for sure, even if I didn't need the IEEE 754 functionality (10g is a better 9i for common operations).

I don't know about using Matlab inside Oracle - but I feel that extending interMedia is best left to an Oracle/Matlab team :)

hth
Alberto Dell'Era Received on Sun Nov 07 2004 - 04:35:35 CST
