Re: PRO*C Performance issues- Comments wanted!

From: Lee Parsons <lparsons_at_exlog.com>
Date: Fri, 3 Sep 93 23:17:07 GMT
Message-ID: <1993Sep3.231707.4653_at_exlog.com>


In article <265f2lINN6db_at_uwm.edu> despair_at_csd4.csd.uwm.edu (Craig Andrew Tollifson) writes:
>> You can also stripe with Oracle using multiple tablespace files for ONE
>> table, have initial extent = size of one raw partition, and minextents =
>> number of raw partitions on the create table statement.
>
>* Haven't decided which to do; the striping software supports RAW datafiles. I
>think that the striping software might be more effective though. Any opinion?

I have to believe that a Unix-striped disk is going to be faster, given that the "stripes" under Oracle would be big (megabyte-sized) while under a striping program they would be small (kilobyte-sized). That says to me that with smaller data chunks there is less chance of contention for any one chunk.

Of course, if you lose a drive you have to restore the entire db, not just a small set of datafiles.
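For reference, the Oracle-level striping mentioned in the quoted text can be sketched roughly like this. The tablespace name, raw device paths, column definitions, and sizes below are all invented for illustration; the idea is one datafile per raw partition, with INITIAL sized to a partition and MINEXTENTS equal to the partition count so the table spreads across all of them at creation:

```sql
-- Hypothetical sketch: stripe one table across three raw partitions
-- by giving its tablespace one datafile per partition.
CREATE TABLESPACE stripe_ts
    DATAFILE '/dev/rdsk/raw1' SIZE 100M,
             '/dev/rdsk/raw2' SIZE 100M,
             '/dev/rdsk/raw3' SIZE 100M;

-- MINEXTENTS 3 forces one extent on each of the three datafiles up front.
CREATE TABLE big_table (
    id      NUMBER,
    payload VARCHAR2(200)
)
TABLESPACE stripe_ts
STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 3);
```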

>> Only index pages and data pages read after an index read are cached with LRU.
>> Full table scans go on the "aged" end of the LRU chain because we assume
>> you will see it once. If your data tables are small, try building a
>> concatenated index with all the columns needed by a select.
>>
>> Set your multiblock_read_count = 32 (32 blocks / I/O, default = 8).
>
>* I NEVER do a full table scan. At most I might read 32 rows in a single
>set. 80% of the time though, I'm getting just 1 row at a time. Also, since
>most of the time I'm just getting one row (maybe a lot of columns [30-40]
>from that row) wouldn't a small multiblock_read_count be alright?

The DBA Guide describes multiblock read as the read size for sequential reads, which implies it is only used for large full table scans. I remember asking Oracle this question and getting the answer back that it is only used when Oracle knows that a full table scan is on the way. So if you never do a full table scan, you don't have to worry about it.

This is from fuzzy memory, so I would confirm it, and there is no promise that it still works this way under V7.
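For what it's worth, these are init.ora parameters; a sketch of the relevant fragment, with illustrative values only (check the exact parameter names against the docs for your version):

```
# init.ora fragment (values for illustration only)
db_file_multiblock_read_count = 32    # blocks fetched per I/O on full table scans
sort_area_size = 65536                # per-process in-memory sort area, in bytes
```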

>> Sorts and merge joins use the sort area size in memory and when it needs
>> more you allocate temporary extents in the user's temp. tablespace.
 

>> temporary tablespace sizing
>
>* Now that's a good question. I don't think I do anything with this. Is there
>something I might want to try? What uses the temporary tablespace? Only sorts
>and joins?

Temporary tablespaces are where user processes go when they run out of sort area. If you don't care much about sort_area_size, you should care even less about this.
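A minimal sketch of setting one up, in case it helps (tablespace name, file path, sizes, and the user are all invented; ALTER USER ... TEMPORARY TABLESPACE is the standard way to point a user at it):

```sql
-- Create a tablespace to hold sort segments that overflow sort_area_size.
CREATE TABLESPACE temp_ts
    DATAFILE '/u02/oradata/temp_ts01.dbf' SIZE 50M
    DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);

-- Direct a user's disk sorts to it.
ALTER USER scott TEMPORARY TABLESPACE temp_ts;
```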

>* Does an EXP/IMP of the entire database clean that up or does IMP only clean
>user tables? Actually, how much DOES EXP/IMP help fragmentation? I've seen
>that most tables, after being brought in via IMP, seem to be only one extent; there
>must be some intelligence there. Would it be wise to EXP/IMP an entire
>database on a regular basis (keeping in mind that my database is "small")?

You shouldn't have to export/import on a regular basis if your storage parameters are not out of whack. Given that your system is "small", though, you shouldn't have any trouble doing this in a reasonable amount of time. Of course, by the same logic you shouldn't have any trouble controlling fragmentation :-}. It is certainly a reasonable way of taking care of the problem once you have it.
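Before deciding whether an exp/imp is worth it, you can see how fragmented things actually are by counting extents per table. A sketch of such a query against the standard DBA views (the threshold of 10 extents is an arbitrary illustration, not a rule of thumb from the docs):

```sql
-- Tables that have grown many extents are the exp/imp candidates;
-- IMP recreates each table with one large initial extent by default.
SELECT owner, segment_name, COUNT(*) AS extent_count
FROM   dba_extents
WHERE  segment_type = 'TABLE'
GROUP  BY owner, segment_name
HAVING COUNT(*) > 10
ORDER  BY extent_count DESC;
```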

>- RAW FILES
>I've actually benched the database with and without using RAW datafiles,
>and in 6.0.27 (ODT 1.1) it seemed to be actually SLOWER to use RAW devices.
>But I still opt for the higher reliability (avoiding the UNIX buffers).

Where does the higher reliability come from? On every version of Unix I have been on, going to the disk via the buffer cache is guaranteed by Oracle to be written to disk if you hit commit: Oracle is going to use the O_SYNC option on the write or do a sync() before the commit complete comes back.

Where are we going to get inconsistent data if all commits are guaranteed to be written to the logfile?

>- ODD TABLES
>* A question: I have these two weird tables called INDEX$BADNESS_STATS or
>something similar. Are they used in O7 or were they accidentally imported
>from a V6 I was using? If I recreate the indexes can I delete them (in V6
>too)? They make me nervous and I keep thinking they are overflow pointers or
>something (even though I don't think BTREE has overflow). Any ideas?

This guy is created by utloidxs.sql. Its job, per the Oracle7 Admin Guide (B-4), is to "analyze a column of a table, generating statistics that can be used to evaluate its suitability for indexing". Given that, it sounds like they are like the bstat/estat tables, and you can zap them without a problem.

>* Also, I have not been able to use SQL*DBA in screen mode, as it has some
>really screwy key commands and doesn't seem to work well on SCOconsole. But
>I might just have been too lazy to read the docs. Got any tips?
 

Use "sqldba lmode=y". This will start it in line mode, the same way it worked under V6.

-- 
Regards, 

Lee E. Parsons                  		Baker Hughes Inteq, Inc
Oracle Database Administrator 			lparsons_at_exlog.com 
Received on Sat Sep 04 1993 - 01:17:07 CEST
