
Re: Large dataset performance

From: jma <junkmailavoid_at_yahoo.com>
Date: 21 Mar 2007 10:29:05 -0700
Message-ID: <1174498145.120192.84520@e65g2000hsc.googlegroups.com>

> This is not an SQL problem and there is not enough information in the
> question to answer it properly, e.g., is there some application
> requirement that the 3.4 M rows be written atomically (all or nothing),
> are 100 users going to do this 100 times per day each, etc.?
> There was another comment about fewer commits, which would make no sense
> if some transaction notion was involved; in fact it would be dangerous.
>

Hello Paul,

The situation is like this: I have to handle the case where a set of remote clients (between 4 and 16) needs to connect to a system and store the results of their analysis. The result is typically a 100-200 MB matrix, but it can be larger, and the number of such matrices would be between 100 and 200.

The clients could write the result to a local file and I could have a server parse that file into a database, but I think that is less elegant than giving the clients a direct connection to the database and having them write their data there themselves. So I am trying to figure out a way for the clients to store their data (as quickly as possible, as usual). Going through text-based queries is a killer; even building up sets of SQL commands takes a lot of time.

So I am looking for alternatives, such as writing BLOBs. But with BLOBs I have to read them back into memory to find what I am looking for, and I lose the whole functionality of a relational database. So my question is: what are the alternatives, if any?
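To make "sets of SQL commands" concrete, below is roughly the kind of bulk-insert path I am comparing against. It is only a sketch, assuming an Oracle backend and the Python cx_Oracle driver, using array binding (executemany) so each round trip carries a whole batch of rows instead of one text INSERT per value. The table and column names are placeholders, not my real schema.

    # Sketch only: bulk-load a dense result matrix with array binding.
    # The table/column names (analysis_result, analysis_id, row_idx, col_idx,
    # val) are made up for illustration.
    import cx_Oracle

    def store_matrix(conn, analysis_id, matrix, batch_rows=10000):
        # matrix: iterable of (row_index, col_index, value) tuples
        sql = ("INSERT INTO analysis_result "
               "(analysis_id, row_idx, col_idx, val) VALUES (:1, :2, :3, :4)")
        cur = conn.cursor()
        batch = []
        for r, c, v in matrix:
            batch.append((analysis_id, r, c, v))
            if len(batch) >= batch_rows:
                cur.executemany(sql, batch)  # one round trip per batch
                batch = []
        if batch:
            cur.executemany(sql, batch)
        conn.commit()  # single commit, so the whole matrix lands atomically

    # usage (connection details are placeholders):
    # conn = cx_Oracle.connect("user", "password", "dbhost/service")
    # store_matrix(conn, 42, [(0, 0, 1.5), (0, 1, 2.0)])

This is much faster than one statement per value, but it still pushes every element through a SQL statement, which is exactly the overhead I would like to avoid.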

BR

jma
