Re: File Processing Question

From: Andrew Kerber <andrew.kerber_at_gmail.com>
Date: Wed, 29 Sep 2010 11:33:41 -0500
Message-ID: <AANLkTikSTJdOLV6Y+Nqfy0fZSD5r_=_KJxoJ6keordZy_at_mail.gmail.com>



The shared storage addition would be pretty cheap. Just the cost of storage, and you could probably put it on your cheapest storage tier. I would expect that the developers would scream if you asked them to change code, and the cost of changing code would probably be greater than the cost of the storage.
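For reference, the poll-process-archive loop described in the original question could be sketched roughly as below. This is only an illustrative standalone sketch, not the original system's code: the class name, directory handling, and the processFile hook are all placeholders, and the real system runs this logic as Java stored procedures inside the database.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

// Sketch of one polling pass: scan the upload directory, process each
// file, then move it to the archive so it is never reprocessed.
public class UploadPoller {

    static List<String> pollOnce(Path uploadDir, Path archiveDir) throws IOException {
        List<String> processed = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(uploadDir)) {
            for (Path file : files) {
                if (!Files.isRegularFile(file)) continue;
                processFile(file); // real parsing/loading would happen here
                // moving the file out of the upload directory is what
                // prevents the next polling pass from seeing it again
                Files.move(file, archiveDir.resolve(file.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
                processed.add(file.getFileName().toString());
            }
        }
        return processed;
    }

    static void processFile(Path file) {
        // placeholder for the actual file processing
    }

    public static void main(String[] args) throws Exception {
        Path upload = Files.createTempDirectory("upload");
        Path archive = Files.createTempDirectory("archive");
        Files.writeString(upload.resolve("feed1.dat"), "data");
        System.out.println(pollOnce(upload, archive));
    }
}
```

Note that whichever option is chosen, the move-to-archive step is the critical piece: with shared storage, any surviving node can run this same pass after a failover without double-processing files.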

On Wed, Sep 29, 2010 at 11:20 AM, Niall Litchfield <niall.litchfield_at_gmail.com> wrote:

> After the wisdom of crowds here.
>
> Consider a system that processes files uploaded by ftp to the DB server.
> Currently the upload directory is polled periodically for new files (since
> they don't all arrive on a predictable schedule with predictable names). Any
> new files are processed and then moved to an archive location so that they
> aren't reprocessed. The polling and processing is done by java stored
> procedures. This system is a RAC system with no shared filesystem storage.
> The jobs that poll run on a particular instance via the 10g Job Class trick.
> The question that I have is: how would you implement resilience to node
> failure for this system? It seems to me that we could:
>
>
> - add shared storage - at a cost, probably.
> - ftp the files directly to the db - implies code changes, probably.
>
> Does anyone else do anything similar and if so how?
>
> --
> Niall Litchfield
> Oracle DBA
> http://www.orawin.info
>

-- 
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Sep 29 2010 - 11:33:41 CDT