
Re: mounting shared file system for Oracle and multiple machines

From: <prochak_at_my-dejanews.com>
Date: 1998/09/24
Message-ID: <6udkkr$8c3$1@nnrp1.dejanews.com>

In article <6u9lfl$5u7$1_at_juliana.sprynet.com>,   "Steve Perry" <sperry_at_sprynet.com> wrote: [snip]
>
> Now, I have a question about having a common shared area for "utl_file_dir"
> access for Oracle on multiple machines.
>
> Currently, we create extract files, ftp them to another server, run a stored
> procedure (or package) that uses the utl_file_dir (I think that's the one)
> and they read and/or write files to that directory and then ftp and on and
> on and on...
>
> I got a request to have a staging area (group of disks) setup that can be
> NFS mounted by multiple machines and look local to them (i.e.
> /u09/stage_data). Many machines could mount this and I would set the
> "utl_file_dir" to point to it.
>
> Our AIX admin said it's possible, but I don't think it's a good enough
> reason to do it. I'm not an AIX expert and don't know the
> impact/overhead/risks of doing it. From a dba point of view, I'm looking for
> opinions. I see a couple of problems like:
> what happens if the disks go down? Multiple machines are affected.

Some impacts are: more work for your admin, less work for your programmers, better performance (NFS is nearly as fast as a local disc), and reliability roughly on par with the local file system.
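For concreteness, the setup you describe is just an init.ora entry plus the usual UTL_FILE calls. A minimal sketch (the file name is invented; /u09/stage_data is your staging mount):

  # init.ora on every instance sharing the staging area
  utl_file_dir = /u09/stage_data

  -- PL/SQL: write an extract file into the shared directory
  DECLARE
    fh UTL_FILE.FILE_TYPE;
  BEGIN
    fh := UTL_FILE.FOPEN('/u09/stage_data', 'extract.dat', 'w');
    UTL_FILE.PUT_LINE(fh, 'one extract record');
    UTL_FILE.FCLOSE(fh);
  END;
  /

Whether the directory is a local disc or an NFS mount is invisible at this level, which is the whole point.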

You are right on this: it is a single point whose failure can potentially stop all your systems. How much that matters depends greatly on how your systems use these files, though. Are they truly critical, or are they just reports? If just reports, can they be generated later?

OTOH, if the procedure's failure to write the file means your entire application on that machine cannot work, then the cost of failure is greater. One workaround: if the remote-mounted directory goes away, you can temporarily mount a local disc with the same directory structure, essentially buffering the data for transfer (FTP) later. (That's a survival plan.) Do you often have problems with failed systems/discs? Or are you simply not able to buy all the disc space you need?

Are the FTPs going between multiple machines, i.e. sharing information in a structured but dispersed manner, or do the majority of files tend to go to a single machine? If it's the latter, then you already have a single point of failure. Put the disc drive on that machine and you haven't increased your risk much.

> Security??? I'm not sure on this

Not really an issue: if they can read the files on the remote mount, they are reading what looks like a local directory, and that means they could be reading your other local directories too. The FTP doesn't help, because you end up with two copies, the original and the FTP'd one.

If you are concerned about packet sniffers on the network, FTP is no more secure than NFS. And you've got bigger problems if you think snoopers are checking your network traffic.

> Troubleshooting? I can see it being a potential problem that's not obvious
> to figure out

Remote-mounted discs have been a stable technique on UNIX since the '80s. Your admin shouldn't have any trouble with it. If the procedure fails to write the file, the problem is limited to a few possibilities: disc failed (full or crashed), remote host failed, network failed. It's not that hard to troubleshoot.
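And if you want the failing procedure to tell you which of those it hit, UTL_FILE raises distinct exceptions you can trap. A rough sketch along the lines of the earlier one (file name still invented):

  DECLARE
    fh UTL_FILE.FILE_TYPE;
  BEGIN
    fh := UTL_FILE.FOPEN('/u09/stage_data', 'extract.dat', 'w');
    UTL_FILE.PUT_LINE(fh, 'one extract record');
    UTL_FILE.FCLOSE(fh);
  EXCEPTION
    WHEN UTL_FILE.INVALID_PATH THEN
      -- directory not listed in utl_file_dir, or the mount point is gone
      DBMS_OUTPUT.PUT_LINE('bad or missing directory');
    WHEN UTL_FILE.INVALID_OPERATION THEN
      -- file could not be opened: permissions, bad mode, etc.
      DBMS_OUTPUT.PUT_LINE('could not open file');
    WHEN UTL_FILE.WRITE_ERROR THEN
      -- the OS write itself failed: disc full, crashed, or net dropped
      DBMS_OUTPUT.PUT_LINE('write failed');
  END;
  /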

> disks fill up? potentially all instances sharing it are affected

Either you FTP, meaning you write the files locally and copy later. This is potentially more expensive, requiring twice the disc space, even though it is spread over several machines.

Or you remote mount, meaning the files exist once on the network. Less disc space required, so less likely to fill up the discs.

> garbage can or heavy maintenance??? it becomes a trash can and I'm unable to
> determine what I can clean up

How do you clean up now? You must have some protocol for naming these files.
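For example, date-stamping the names makes age-based cleanup mechanical. (The GL_ prefix and layout here are just an illustration.)

  DECLARE
    fname VARCHAR2(64);
    fh    UTL_FILE.FILE_TYPE;
  BEGIN
    -- e.g. GL_19980924.dat: anything older than N days is fair
    -- game for the cleanup job, no guessing required
    fname := 'GL_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.dat';
    fh := UTL_FILE.FOPEN('/u09/stage_data', fname, 'w');
    UTL_FILE.FCLOSE(fh);
  END;
  /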

> more or less network traffic??? I don't know. I can't imagine you eliminate
> the ftp. It's a sql net transfer instead I guess.
>
> From the programming staff, they thought because it would look local it
> would eliminate ftp's. Like I said above, data still has to be passed back
> and forth. I don't think sqlnet is faster than ftp...

It's an NFS transfer. Overall net load is probably about the same: if you fire off all the FTPs at once, you get about the same load as running all the file generators at once over NFS. The FTP might be slightly less load on the net, but it pays a time cost of writing the file to disc twice, once on the source machine and once on the destination.

>
> Any input would be apprecated.

 (appreciated, hey you asked for input! 8^)
>
> Thanks,
> Steve
>
>

Are the files really batches of individual changes that must move from one database to the others, or is the entire file basically one transaction? One table or many? Is the FTP copy a manual process or program-driven? These affect your choices as much as the network protocol does.

Have you considered other methods? There are other ways of sharing the data.

**Are all the machines running ORACLE? There are data replication products from ORACLE that can handle this. (You have SQL*Net, right?) Why FTP when you could use the ORACLE COPY command? A SQL script may be all you need: the procedure writes a table and the script copies it.
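A minimal sketch of that route, assuming an invented table name and connect strings, would be a SQL*Plus script like:

  -- pull the staging table across SQL*Net; no flat file, no FTP
  -- (scott/tiger@source_db and stage_extract are invented names;
  --  the destination table must already exist for INSERT)
  SET ARRAYSIZE 100
  SET COPYCOMMIT 10
  COPY FROM scott/tiger@source_db -
    INSERT stage_extract -
    USING SELECT * FROM stage_extract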

**Are the machines running different OSs or databases? There are other commercial products that can help. You mentioned using AIX boxes. Since you have IBM equipment, how about checking out message middleware: MQSeries from IBM? It can transfer the information for you, though you have to write programs to put the data on the queue and remove it at the other end. But those could be Pro*C (or other language) programs that are the exact equivalent of your current PL/SQL procedures. There are other middleware products from other sources too.

**I assume you have already ruled out writing your own low-level (sockets, RPC) programs to achieve this.

**Have you also considered the data conversion/interface needs of your system? If you are running different databases, you may need to translate the files at some step in the process. Since you chose to transfer files, I assume that is because different products use the data on the different machines. <shameless plug> My company provides such conversion software and development services. Call my office. </shameless plug>

Like any complex issue, there are solutions with advantages and disadvantages. Your solution may not be as risky as you think.

Good luck.

--
Ed Prochak
Magic Interface, Ltd.
440-498-3702

