Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: (Partial) deep-geek discussion

From: ksjune <ksjune_at_sys.gsnu.ac.kr>
Date: Fri, 25 Jun 1999 17:27:58 +0900
Message-ID: <37733D8E.EC1274C4@sys.gsnu.ac.kr>


Hi!!!

I'm running a two-node NCR 5100 cluster: 1 GB of main memory and 4 CPUs
per node, 120 GB of shared external disk, Oracle7 OPS, and Tuxedo 6.3.

>
> A) There is also a D1000 array for system disks, $ORACLE_HOME, apps,
> archive logs, etc.

==> Use the external disk for Oracle, the applications, and the logs.

       If the OS internal disk crashes, you can still save the others.

> B) The database consists of several categories of objects:
> ~40 GB of large tables that are extremely read intensive during critical
> performance periods, but inserted into during an hour-long "batch'ish"
> process.
> ~1 GB of infrequently updated/inserted/deleted data that is read critical
> ~4-8 GB of intensely updated, inserted, and read data - most critical!
> ~1-2 GB of non-persistent(!) data that is inserted -> read -> deleted
>

>
> C) This database sustains 200 TPS and peaks at 800-2000 TPS.
> It generates 10-20 GB of archive log every day (depending on activity)

==> I don't think the raw TPS number is the issue. If you tune the SQL

        statements and manage the pings, you can cope with it.
       But that is a lot of archive log. Make the archive destination big,
       approximately 8 GB, say, and back it up to a backup device with cron.
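The cron-driven backup suggested above could be sketched like this in shell (the directory arguments and the ".arc" suffix are assumptions; match them to your actual log_archive_dest setting):

```shell
#!/bin/sh
# Sketch: sweep archived redo logs out of the archive destination
# onto a backup device. Paths and the ".arc" suffix are illustrative.
backup_arch() {
    arch_dir=$1      # e.g. the log_archive_dest directory (~8 GB)
    backup_dir=$2    # mount point of the backup device

    mkdir -p "$backup_dir" || return 1

    for f in "$arch_dir"/*.arc; do
        [ -f "$f" ] || continue
        base=`basename "$f"`
        # Copy first, and only remove the original once the copy
        # succeeded, so a failed backup never loses an archived log.
        if cp "$f" "$backup_dir/$base"; then
            rm -f "$f"
        fi
    done
}
```

A crontab entry such as `0 */6 * * * /usr/local/bin/backup_arch.sh` (hypothetical script path) would run it every six hours.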

>
> D) Each A3500 has 128 MB cache, 50 9GB 10k RPM drives, 2 controllers
> (Disk space obviously isn't the issue - the spindle count is for speed!)
>
> E) Memory and CPU are tunable - starting at 12 GB & 12 337MHz CPUs
> (Could go significantly larger (~3x) in this domain.)
>
> F) Will probably use 8k Oracle block size (perhaps 16k after testing)
>

==> How about 4k? I think that the bigger the block, the more disk you need...
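Whichever size the testing favors has to be chosen up front: in Oracle7 the block size is fixed at CREATE DATABASE time via init.ora. A minimal fragment (4096 here is just the suggestion above, not a recommendation):

```
# init.ora fragment: the block size is set once, at database creation,
# and cannot be changed afterwards without rebuilding the database.
db_block_size = 4096
```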

>
> ---
>
> Assume that everything will be on raw devices (except archived logs of
> course) since OPS is involved. The questions are:
>
> 1) Cache-thrashing seems to be a potential issue here. Since the entire
> array has only 128 MB of cache, what are thoughts about caching only
> the redo logs? What about turning off all read cache and reserving it for
> writes only? Eh?

>
> 2) Online redo logs will be ping-ponged between at least two drives,
> Peak is a 100MB log switch every 45-90 seconds, during less critical
> hours. Most critical hours see 100MB log switch every 2-5 minutes.
>

==> We use 20 MB redo logs and generate approximately 1 GB of redo per day.

        We never notice any wait time while the redo log is being written.
        I would keep your redo log size at 50 MB or below; 100 MB is
        too big.
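As a quick sanity check on log size versus switch rate, using the 10-20 GB/day archive figure quoted above (shell arithmetic, taking 20 GB as the worst case):

```shell
# 20 GB/day of redo through 50 MB logs:
# 20480 MB / 50 MB   = log switches per day
# 86400 s / switches = seconds between switches
expr 20480 / 50      # about 409 switches per day
expr 86400 / 409     # about 211 seconds, i.e. roughly 3.5 minutes
```

So at that redo volume even a 50 MB log switches far more often than the 15-20 minute rule of thumb discussed below.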


>
> a) Rule of thumb is a log switch every 15/20/<pick your religion> minutes,
> but are near-gigabyte redo logs even feasible? What are the largest
> you've used for ultra hot OLTP? What is your religion here?
>

==> As I already mentioned, use cron.

        Every six hours or so, a cron job can back up the archived log files.

>
> b) Given the nature of redo logs, has anyone really seen significant
> benefit from striping them? (As in ping-ponging between two striped
> n-disk LUNs?)
>
> 3) Assume that 4 of the 50 disks in each A3500 are dedicated to redo logs.
> That leaves 46 disks and 6 available LUNs. It could be set up as
> anything between "stripe everything across everything" ( a newer
> "start-up" religion) and 6 striped LUNs. For example, the latter might be
> 5 LUNs of 8-disk wide stripe sets plus one LUN of a 6-disk wide stripe set.
> Traditionally, I would have gone with something like this, but the appeal
> of "everything everywhere" is intriguing - especially since the I/O is so
> random. Experiences? Religion? Horror stories?
>
> 4) Now, to complicate matters, consider if you had three of the A3500 arrays
> with 50 disks each and one D5000 (?) array (100 MB/sec, but no cache).
> How would you split the redo and the various type of segments across the
> various arrays? Dedicate one A3500 array to redo logs?!!??!?!? (For
> caching efficiency primarily...)

==> I think this one is simple: create 3 to 6 redo log groups and lay them
       out on one disk array.

       Just my opinion.
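In SQL that layout might be sketched as follows (the group numbers, the 50 MB size, and the raw-device paths are all placeholders; under OPS each instance needs its own thread of redo, hence the THREAD clause):

```sql
-- Sketch only: placeholder group numbers, sizes, and raw-device names.
-- All groups sit on the one array dedicated to redo.
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4
    ('/dev/rdsk/redo_array_g4') SIZE 50M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5
    ('/dev/rdsk/redo_array_g5') SIZE 50M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6
    ('/dev/rdsk/redo_array_g6') SIZE 50M;
```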

>
> 5) Does the introduction of OPS change anything significant in your model?
> (Please! No obvious and generic "partitioning" warnings.)
>
> 6) What would be your optimal stripe size for each segment type?
> (Please specify whether per disk column or per stripe width.)
>

==> I made each datafile 500 MB and extended tablespaces using "add datafile".

       But that turned out to be too small; I would go with 1 GB or 2 GB each.
       It depends on your situation.
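The "add datafile" growth path mentioned above looks like this (the tablespace name and raw-device path are hypothetical):

```sql
-- Sketch: grow a tablespace by one more 1 GB datafile.
-- Tablespace name and raw-device path are placeholders.
ALTER TABLESPACE app_data
    ADD DATAFILE '/dev/rdsk/app_data_04' SIZE 1000M;
```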

Hope this helps...

ksjune_at_sys.gsnu.ac.kr Received on Fri Jun 25 1999 - 03:27:58 CDT

