Re: Best approach for multiplatform RDBMS development
In article <MVpR2.33889$_k1.25514_at_news.chello.at>, "Marcus N Hofer"
<markus_at_tk136248.telekabel.at> writes:
>With a proper network, 18 records inserted per second should work (depends
>on architecture, data recording tool and record size, of course).
We have installed Oracle on an Alpha 4100 running OpenVMS v7.1.

A custom driver was written to retrieve plant control data 18 times per second over raw Ethernet. The driver takes each record and places it into a global section. Once a record is stored there, other processes (such as the tracking and data manager processes) retrieve it and do their own work: creating flat files, reports, etc.

We want to replace the existing flat files with database tables. I believe I can replace all of the flat files and reporting programs with Oracle, EXCEPT for the custom driver that fills the global section.
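For what it's worth, here is a minimal sketch of what the data-manager side could look like once the flat files are gone: a Pro*C loop that drains the global section and writes each 18-record batch with a single host-array INSERT and one commit per second of data. The PLANT_DATA table, its columns, the read_batch_from_global_section() helper, and the connect string are all hypothetical placeholders for the site-specific pieces, not anything taken from the system described above.

```c
/*
 * Sketch only: a Pro*C consumer that inserts one second's worth of
 * samples (18 rows) per round trip instead of writing a flat file.
 * PLANT_DATA and read_batch_from_global_section() are assumptions.
 * Precompile with proc, then compile and link as usual.
 */
#include <stdio.h>
#include <string.h>

EXEC SQL INCLUDE sqlca;

#define BATCH 18

EXEC SQL BEGIN DECLARE SECTION;
    VARCHAR connstr[64];           /* "user/password" */
    int     tag_id[BATCH];         /* host arrays: one INSERT = BATCH rows */
    int     sample_no[BATCH];
    double  tag_value[BATCH];
EXEC SQL END DECLARE SECTION;

/* Hypothetical helper: copies the next BATCH records out of the global
 * section into the host arrays.  Returns 0 when there is nothing left. */
extern int read_batch_from_global_section(int *tag_id, int *sample_no,
                                          double *tag_value, int n);

int main(void)
{
    strcpy((char *) connstr.arr, "scott/tiger");   /* placeholder credentials */
    connstr.len = (unsigned short) strlen((char *) connstr.arr);

    EXEC SQL WHENEVER SQLERROR GOTO sql_error;
    EXEC SQL CONNECT :connstr;

    while (read_batch_from_global_section(tag_id, sample_no, tag_value, BATCH)) {
        /* The bind variables are arrays of dimension BATCH, so this one
         * statement inserts all 18 rows in a single round trip. */
        EXEC SQL INSERT INTO plant_data (tag_id, sample_no, tag_value)
                 VALUES (:tag_id, :sample_no, :tag_value);
        EXEC SQL COMMIT WORK;                      /* one commit per second of data */
    }

    EXEC SQL COMMIT WORK RELEASE;
    return 0;

sql_error:
    fprintf(stderr, "Oracle error: %.*s\n",
            sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc);
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK RELEASE;
    return 1;
}
```

The point of the host arrays is that per-statement overhead is paid once per batch rather than once per row, so the once-per-second commit pace is set by the driver filling the global section, not by the database.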
I've always assumed that the fastest a database can be updated is once per second. Am I mistaken? Isn't SQL slower than compiled code (such as C)?

JPL

Received on Fri Apr 23 1999 - 12:12:34 CDT