
Re: is this an appropriate case for parallel processing?

From: Antoine BRUNEL <antoinebrunel_at_yahoo.fr>
Date: Sun, 18 May 2003 01:26:51 +0200
Message-ID: <3ec6c53f$0$14406$79c14f64@nan-newsreader-03.noos.net>


re-Hi from Paris

Certainly, HASH joins are well suited to PQO: these joins do full table scans (FTS), an operation that can be parallelized, so PQO can be a good idea here.
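
For illustration, a hinted parallel hash join might look roughly like this (the table names, aliases and the degree of 4 are invented, not taken from any real schema):

  SELECT /*+ USE_HASH(o) FULL(o) FULL(c)
             PARALLEL(o, 4) PARALLEL(c, 4) */
         c.cust_id, o.order_id, o.amount
    FROM customers c, orders o
   WHERE o.cust_id = c.cust_id;

Both full scans, and the hash join itself, can then be spread across slave processes.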

The size of the private memory area of each slave process is mainly determined by the sort area size, so the maximum number of slave processes will depend on the capacity of your machine.
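
A rough back-of-the-envelope sketch (the figures are pure assumptions, only to show the arithmetic):

  sort_area_size = 10 MB per process, 8 slaves busy at once
    -> about 8 * 10 MB = 80 MB of private memory for sorts alone
  hash_area_size defaults to 2 * sort_area_size
    -> the same 8 slaves could need about 160 MB while hash joining

So size the number of slaves with that multiplication and your physical memory in mind.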

"Ryan" <rgaffuri_at_cox.net> a écrit dans le message de news:5Gvxa.24235$823.18006_at_news1.east.cox.net...
>
> "Antoine BRUNEL" <antoinebrunel/yahoo.fr> wrote in message
> news:3ec681a2$0$12994$79c14f64_at_nan-newsreader-03.noos.net...
> > Hi from Paris
> >
> > and unfortunately, I think you should try ... even in the worst case,
> > you won't get much performance degradation.
> >
> > Update the init.ora to set PARALLEL_MAX_SERVERS / PARALLEL_MIN_SERVERS to
> > reasonable values (don't hesitate to put a high value for max, even more
> > than the number of CPUs), because several sessions could use slave
> > processes.
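
For instance, a minimal init.ora sketch (the numbers are only placeholders for a 4-CPU box, not recommendations):

  parallel_min_servers     = 4
  parallel_max_servers     = 16
  parallel_threads_per_cpu = 2
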
> >
> >
> > Then, ensure the tables' / indexes' degree of parallelism is > 1, and your
> > SQL will use it:
> > - either by specifying the PARALLEL hint
> > - or, more dangerous, at the instance level, by setting
> >   OPTIMIZER_PERCENT_PARALLEL
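
For example (object names and the degree of 4 are invented for illustration):

  ALTER TABLE big_fact    PARALLEL (DEGREE 4);
  ALTER INDEX big_fact_pk PARALLEL (DEGREE 4);

  -- or statement by statement, with a hint:
  SELECT /*+ PARALLEL(f, 4) */ COUNT(*) FROM big_fact f;
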
> >
> > From my point of view, I think PQO won't help for segments of less than
> > 1 GB ...
> >
> > You will have to ensure I/O is not a bottleneck, and your CPUs really are
> > idle ...
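
One quick way to watch slave activity while you test (V$PQ_SYSSTAT is the standard view; for the I/O side you are better off with iostat / sar on Solaris):

  SELECT statistic, value
    FROM v$pq_sysstat
   WHERE statistic LIKE 'Servers%';
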
> >
> > PQO is a VERY specific option, and must be used wisely in order to get
> > results ... much more so than people might think.
> >
> >
> > "Ryan" <rgaffuri_at_cox.net> a écrit dans le message de
> > news:a%cxa.18856$823.18445_at_news1.east.cox.net...
> > > We have an instance running Oracle 8.1.7.3 on Solaris 5.8 (yes, I know
> > > it's antiquated) with 4 CPUs, using a Network Appliance disk array.
> > >
> > > We SQL*Load into a staging tablespace, then do an insert append and a
> > > CREATE TABLE AS SELECT.
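
For reference, the parallel flavours of those two steps would look roughly like this (table names and the degree of 4 are invented; parallel DML has to be enabled per session):

  ALTER SESSION ENABLE PARALLEL DML;

  INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_tab t
  SELECT /*+ PARALLEL(s, 4) */ * FROM stage_tab s;
  COMMIT;  -- a direct-path insert must be committed before the table is re-read

  CREATE TABLE summary_tab PARALLEL (DEGREE 4) NOLOGGING
  AS SELECT /*+ PARALLEL(s, 4) */ cust_id, SUM(amount) amt
       FROM stage_tab s
      GROUP BY cust_id;
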
> > >
> > > We have 5-6 tables over 100 MB and the largest is 1 GB. We have up to 73
> > > tables that we have to batch process in a night. We rarely run all of
> > > them, and about 40 of them are less than 10 MB, some less than 100 KB.
> > >
> > > I was thinking of using parallel inserts, CREATE TABLE AS, parallel
> > > index building, and parallelism in a HASH_AJ we need to do. I'm also
> > > going to use DBMS_JOB to create the indexes; since the tables have
> > > multiple indexes, if I run them with DBMS_JOB I can build all of them
> > > at the same time.
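
Queuing the index builds through DBMS_JOB could look something like this (index and table names are made up; in 8i the DDL has to go through EXECUTE IMMEDIATE inside the job's PL/SQL, and JOB_QUEUE_PROCESSES must be at least the number of jobs you want running at once):

  DECLARE
    l_job BINARY_INTEGER;
  BEGIN
    DBMS_JOB.SUBMIT(l_job,
      'BEGIN EXECUTE IMMEDIATE ' ||
      '''CREATE INDEX stg_ord_ix1 ON stg_orders (cust_id) ' ||
      'PARALLEL 4 NOLOGGING''; END;');
    DBMS_JOB.SUBMIT(l_job,
      'BEGIN EXECUTE IMMEDIATE ' ||
      '''CREATE INDEX stg_ord_ix2 ON stg_orders (order_dt) ' ||
      'PARALLEL 4 NOLOGGING''; END;');
    COMMIT;  -- the jobs only start after the commit
  END;
  /
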
> > >
> > > How do I determine whether parallel processing will suck up too many
> > > resources? I can't just run it and see what happens, because it has to
> > > succeed. I can test it on a Saturday, but I'd prefer not to blow a
> > > Saturday if this is totally going in the wrong direction.
> > >
> > > We currently have MAX Slaves set to 5, and Threads per CPU set to 2.
> > > Not too familiar with Parallel queries. Just started looking at it.
> > >
> > > Anyone have any advice on this?
> > >
> > >
> >
> All of our tables and indexes are in the same datafile. However, I'm doing
> HASH_AJs on large tables of up to 1 GB in size and up to 20 million rows.
> Will parallelism help with this kind of sorting in memory?
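
A parallel hash anti-join is usually hinted something like this (names and the degree of 4 are invented; remember NOT IN only becomes an anti-join when the subquery column cannot be NULL, hence the IS NOT NULL):

  SELECT /*+ PARALLEL(n, 4) */ n.id
    FROM new_rows n
   WHERE n.id NOT IN (SELECT /*+ HASH_AJ */ o.id
                        FROM old_rows o
                       WHERE o.id IS NOT NULL);
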
>
> Any downside to setting a high MAX Slaves? I'll have to get the server
> bounced, and I don't want to do it if there are negatives. Any suggestion on
> an appropriate setting?
>
> >
>
>
Received on Sat May 17 2003 - 18:26:51 CDT
