SUMMARY: Performance Questions under VMS and Oracle as a PIG

From: Bob Comarow <comarow_at_eisner.decus.org>
Date: 7 Dec 93 11:34:01 -0500
Message-ID: <1993Dec7.113401.1527_at_eisner>


SUMMARY: Summary of responses regarding specific performance questions under VMS. Religious responses were ignored. Some very useful information included.

Subject: RE: Oracle Performance Questions (Under VMS)

In article <1993Nov23.153648.1320_at_eisner> you write:

|>Our Oracle applications being investigated use SQL Forms, SQL Report Writer,
|>SQL Menu, PL/SQL, and SQL.  They are running on a 6630, users are granted
|>working sets greater than 15,000 pages, and we are ringing the bell. 

Just curious - which bell are you ringing? Running out of physical
memory? Or just maxing out your working set extents? We have a 9420
with 512 MB of memory and lots of users. All our WSEXTENTs are 32,000.

|>1:  What kinds of things in an application (using the above tools) would
|>require a very large page file quota. ( Users on some applications require
|>a page file quota of 180,000).

The thing with the biggest effect on page file quotas at our site is
the size of the SGAPAD that Oracle was linked with (ours is 16 MB). It
took me a long time to grasp how the SGA size and SGAPAD size affect
PGFLQUOTA, but it's a good thing to know under VMS. I have an article
from the IOUW proceedings titled 'VAX/VMS: How Virtual Memory Quotas
Relate to the SGA', which explains this pretty well. I'd highly
recommend reading it, even for experienced VMS sysmgrs. If you want, I
can fax you a copy - let me know.

One important point it discusses is how the SGA's global section gets mapped in when an application first connects. A rough quote from this article:

"When you connect to the database, you first call $CRETVA to creat a virtual address space within the process equal to the size of the SGAPAD. Since you are creating demand-zero pages, VMS charges thost against your pagefile quota. Then you call $MPGBLSC to map thost pages against the SGA global section. Since the SGA pages are global, you do not get charged for them. So you get a refund equal to the number of pages in the SGA."

This explains something I've noticed on our VAXes: when an application
starts up, you sometimes need a huge pagefile quota just to get
started. A split second later, it releases most of the quota that was
just used. The running application needs far less actual pagefile
space than pagefile quota.
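
To put rough numbers on that (a sketch only: 512-byte VAX pages, the
16 MB SGAPAD mentioned above, and an invented 12 MB SGA):

    SGAPAD:  16 MB / 512 bytes per page = 32,768 pages charged by $CRETVA
    SGA:     12 MB / 512 bytes per page = 24,576 pages refunded by $MPGBLSC
    ---------------------------------------------------------------------
    Net charge once connected:   8,192 pages
    Peak charge during connect: 32,768 pages  <- what PGFLQUO must cover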

Hope this helps.

Joe Kazimierczyk
kaz_at_bms.com


X-News: eisner comp.databases.oracle:4950
From: mloennro_at_se.oracle.com (Magnus Lonnroth)
Subject: Re: Is Oracle Simply a Pig - The Worst I've Seen
Date: Wed, 1 Dec 1993 11:44:25 GMT
Message-ID:<MLOENNRO.93Dec1114426_at_demo1.se.oracle.com>

>>>>> "Jim" == Jim Kramer <kramer_at_ash.sps.mot.com> writes:

  Jim> Yes!  Oracle is a GIANT SQUEALER!  However, I'm pretty sure
  Jim> that all relational databases are.  Remember you're dealing
  Jim> with something that spends all of its time talking to the
  Jim> disk!  This is why DB tuning is SOOOOO IMPORTANT.  The other
  Jim> thing you can do is buy really fast disk drives (fast & wide
  Jim> SCSI, etc).  You really do have to select the right
  Jim> computer/peripherals for the job.  This ain't no DISKO, ya
  Jim> know!

  Jim>                          Jim Kramer Motorola

The original poster was complaining about excessive memory demands by
Oracle front-end processes - not I/O contention. Oracle should never
be I/O bound; it should always be CPU bound - that's our whole
strategy. CPUs get cheaper and faster all the time; disks don't (at
least not yet).

Concerning memory usage, Oracle will use what you tell it to use. If you say db_block_buffers=10000, Oracle will happily devour the corresponding amount of memory. If you say sort_area_size=1048576, every Oracle server process will happily gobble up 1 MB for in-memory sorting. If you say open_cursors=255, you are allowing every Oracle server process to devour 255 * ~16k of dynamic memory, etc, etc. If your machine is giving Oracle more memory than it can afford, you will certainly get disk problems due to paging/swapping. The solution is *not* to buy faster disks.
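
To make that arithmetic concrete, here is a minimal init.ora sketch
(the values are invented for illustration and assume a 2 KB
db_block_size; the ~16 KB-per-cursor figure is the one quoted above):

    # init.ora sketch -- example values, not recommendations
    db_block_buffers = 10000    # 10000 * 2 KB blocks = ~20 MB of SGA cache
    sort_area_size   = 1048576  # ~1 MB of sort space PER server process
    open_cursors     = 255      # up to 255 * ~16 KB = ~4 MB per process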

To the original poster: get in touch with your local Oracle office and get some assistance in tuning memory usage. Or (I know you'll hate me for this) buy more memory.

And hey, pigs are *nice*, and smart too.



From: pihlab_at_cbr.hhcs.gov.au
Subject: oink oink
>
> Is Oracle Simply a Pig?

It can be a big Razor Back Bastard if you let it.

We run 7 production databases and assorted Development, Acceptance
Testing and Training databases on VAX 9420, 9410 and 6520 machines,
plus we support DBAs and applications at 8 other sites ranging from
6620s to 4700s.

>
> While these questions were posted by the data base group, I have
> never seen an application require so much CPU to perform so little
> work.

Our capacity planning people rate it at 1 Oracle user = 3-5 ALLIN1 users.

>
> Is it because of the tools selected?

Not necessarily. I find that it's usually all the bells and whistles
being used by the application, plus the fact that most developers
assume that the tools and SQL will tune themselves... they don't. We
have two major systems that are roughly the same size with the same
number of users, but one is much more resource-hungry because of poor
application tuning and user work practices.

Oracle can run super quick and super slow all in the one application.

Our biggest slow down is bad or non-existent application tuning.

>
> Answers to these questions would be greatly appreciated, especially
> by the Oracle employees reading this group.
>
>
> In article <1993Nov23.153648.1320_at_eisner>, comarow_at_eisner.decus.org (Bob Comarow) writes:

>> 
>> Three Oracle Questions by our DB group running under VMS.  Answer all three of
>> them and win the prize-we won't hire you:-)
>> 
>> Our Oracle applications being investigated use SQL Forms, SQL Report Writer,
>> SQL Menu, PL/SQL, and SQL.  They are running on a 6630, users are granted
>> working sets greater than 15,000 pages, and we are ringing the bell. 

SQL*ReportWriter reports should be kept as simple as possible. Have no
more than 2 levels of SQL. If at all possible have only one SQL query
and use the features of ReportWriter to do your logical blocking.
Filter out all unwanted rows and have the RDBMS (i.e. the SQL query)
do any formatting changes and sorting.

Spend a LOT of time TUNING your SQL statements! Check that your
INDEXES are being used effectively. If you are using Oracle7 then make
sure you have run ANALYZE against each table/index to gather stats for
the cost-based optimiser.
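
For example (EMP stands in for one of your own tables; standard
Oracle7 syntax):

    ANALYZE TABLE emp COMPUTE STATISTICS;

    -- or, for very large tables, estimate from a sample instead:
    ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 20 PERCENT;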

>> 
>> The questions posed by our DB group:
>> 
>> 1:  What kinds of things in an application (using the above tools) would
>> require a very large page file quota. ( Users on some applications require
>> a page file quota of 180,000).

A problem with VMS is that if you don't have SQL*Forms and the other
products linked into SQL*Menu, then every time you invoke one from
SQL*Menu you cause a SPAWN, and the spawned subprocess gets half your
available page file quota.

If you SPAWN a few times you run out of space, because the SGA is
mapped into EACH product. If the SQL*Plus and SQL*Forms products are
linked into SQL*Menu then it doesn't SPAWN, and you save both work and
memory.

Install the product executables as shared images to save lots of memory.

The size of your SGA is important. If you link the kernel with a 32 MB
SGAPAD, say, but only use 16 MB when you start the database, you are
still reserving a 32 MB chunk of virtual address space per user
process. This is not normally a problem if you set each user's virtual
page count to something very large. Don't panic about giving the users
a large working set extent and a large page file quota, as the vast
majority of it is shared SGA and executables. If you don't use all of
your SGA then consider using it, or relink your kernel smaller.

>> 
>> 
>> 2:  In SQL Forms 3.0 , what is the relationship between amounts/location of
>> memory used and the "Records Buffered" setting?

I'm not sure of the exact byte relationship, but it depends on the
size of each ROW you are buffering. I think you need to multiply the
number of rows by the row size and add some small overhead for counts
and pointers. Set your record buffering to just a little more than the
average number of rows retrieved by the SQL statement; this ensures
optimum use of memory. If the user regularly exceeds this setting then
extra records are buffered to the user's SYS$LOGIN directory (as a
temporary file). Normally the records buffered per query will be small
if the application is designed properly, and the defaults should be
fine.
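
As a rough back-of-the-envelope sketch of that multiplication (all
numbers invented for illustration):

    100 records buffered  *  ~200 bytes per row  =  ~20,000 bytes
    + small overhead for counts and pointers     =  roughly 20 KB in memory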

You will probably experience performance problems if the user reads
LOTS of rows that get buffered to their disk area, OR if the buffered
area is very large and the user's process starts paging heavily to
manage the buffer.

>> 
>> 
>> 3:  How do LONG fields affect performance with regard to array fetching, record
>> buffering, etc.

Each row is buffered as it is passed to the SQL*Forms tool, so the
only resource-hungry part should be how much of the LONG is populated.

If you are using SQL*Forms as an interactive user-access tool then you
should be trying to minimise the rows returned by each query to
guarantee performance. I have seen developers create SQL*Forms
routines that do bulk updates with running counts displayed; SQL*Forms
is not efficient for this sort of work. Use SQL*Plus and PL/SQL for
bulk work of this kind.

If you are running client/server (or two-task - not advised unless you
have SQL*Net V2) then you MUST reduce the number of trips to the
database and the volume of rows returned across your network.

Find where your bottlenecks and high resource usage are, and tune
them. If you have a large SGA and are doing lots of full table scans
on tables mostly held in the SGA, then you are wasting CPU cycles
scanning buffers.

Is your system paging heavily, CPU bound, or disk bound? How many
users are you trying to support? What size is your SGA? How much
memory do you have?

I've done some serious O/S tuning here to meet Oracle's needs.

>>


* Bruce Pihlamae -- Database Administration *
* Commonwealth Department of *
* Health, Housing, Local Government & Community Services *
* Canberra, Australia (W) 06-289-7056 *
* These are my own thoughts and opinions, few that I have. *


From: dlogics!dlm_at_uunet.UU.NET (Dave Mausner)
Subject: RE: Oracle Performance Questions (Under VMS)
>1:  What kinds of things in an application (using the above tools) would
>require a very large page file quota. ( Users on some applications require
>a page file quota of 180,000).
>

 SGA size, then process memory usage.
>
>2:  In SQL Forms 3.0 , what is the relationship between amounts/location of
>memory used and the "Records Buffered" setting?
>

Direct relationship. The more records you buffer, the more memory used by the process.
>
>3:  How do LONG fields affect performance with regard to array fetching, record
>buffering, etc.
>

LONG fields, as usual in Oracle, cause problems because they are
generally responsible for row-chaining. If you intermix LONG data with
lots of short data, even if you only display the short stuff you still
have to plow through the LONG data to get at the rows. This is not the
tool's fault. The best bet is to put LONG data in a table by itself,
with minimal primary key info, and put all the other stuff you need in
a parallel table (see the sketch below).

As for Forms, LONG data handled as LONG (not CHAR) will usually
disable buffering, due to the unlikelihood that you can buffer N rows
(N large) each containing a 64K LONG array.
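
A minimal sketch of the parallel-table design Dave describes
(hypothetical table and column names; Oracle7 syntax):

    -- short, frequently-displayed columns live here
    CREATE TABLE doc_head (
        doc_id  NUMBER NOT NULL PRIMARY KEY,
        title   VARCHAR2(80),
        author  VARCHAR2(40)
    );

    -- the LONG is isolated in its own table, joined by the same key
    CREATE TABLE doc_text (
        doc_id  NUMBER NOT NULL PRIMARY KEY,
        body    LONG
    );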

--
Dave Mausner / Sr Consultant / Datalogics division of Frame Technology Inc
441 W Huron / Chicago IL 60610 / +1-312-266-4450 / "Just show me the code"


----------------------------------------------------------------------------
Subj:	Oracle pigs

From: mloennro_at_se.oracle.com (Magnus Lonnroth)
Reply-to: mloennro_at_us.oracle.com
Message-id: <9312011726.AA00425_at_demo1.oracle.com>
Content-transfer-encoding: 7BIT

>>>>> "Bob" == Bob Comarow <comarow_at_eisner.decus.org> writes:

  Bob> Is Oracle Simply a Pig?

  Bob> While these questions were posted by the data base group, I
  Bob> have never seen an application require so much CPU to perform
  Bob> so little work.

Guess I missed the CPU bit here. As I understand the included message
(also by you), the problem was paging (actually page-file quota).
Wouldn't excessive paging also put a heavy load on the CPU(s)?

  Bob> Is it because of the tools selected?

I don't think the tools themselves can overload your processors. Lots
of people are running both old and new versions of all our tools on
much smaller machines. Of course, it's pretty easy to bring just about
anything to its knees with Oracle (just ask the wrong question). I
would suggest you try tracing a couple of your applications; this will
clearly show whether the load is on the server side or in the
applications. The trace-file contains statistics for all server
operations, so if they are good, you should concentrate on the
applications.
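
For example, something along these lines (Oracle7 syntax; under V6 you
would set sql_trace in init.ora instead, and timed_statistics=true is
needed to get timings):

    ALTER SESSION SET SQL_TRACE = TRUE;   -- start writing a trace file
    -- ... exercise the suspect application ...
    ALTER SESSION SET SQL_TRACE = FALSE;  -- stop tracing
    -- then format the trace file with the TKPROF utility, roughly:
    --   tkprof <tracefile> <outputfile>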

  Bob> Answers to these questions would be greatly appreciated,
  Bob> especially by the Oracle employees reading this group.


  Bob> In article <1993Nov23.153648.1320_at_eisner>,
  Bob> comarow_at_eisner.decus.org (Bob Comarow) writes:
  >>  Three Oracle Questions by our DB group running under VMS.
  >> Answer all three of them and win the prize-we won't hire you:-)
  >> 
  >> Our Oracle applications being investigated use SQL Forms, SQL
  >> Report Writer, SQL Menu, PL/SQL, and SQL.  They are running on a
  >> 6630, users are granted working sets greater than 15,000 pages,
  >> and we are ringing the bell.
  >> 
  >> The questions posed by our DB group:
  >> 
  >> 1: What kinds of things in an application (using the above tools)
  >> would require a very large page file quota. ( Users on some
  >> applications require a page file quota of 180,000).
  >> 
If you try using a 2-task driver, the Oracle server will run in a
separate process (on VMS, single-task is the default). This will
clearly show what is paging. Please note that using 2-task drivers
will increase the load on your machine. There have been several
memory-leaks in older VMS versions and in client-side PL/SQL,
Forms v3 and Menu v5; please verify with your local Oracle office
that you are using the latest versions/patches. ReportWriter generates
temporary files for buffering. In Oracle V6 with the old tools, a
complex application can easily require several MB per user at runtime.
In v7 and the new tools, both SQL and application definitions are
shared, so the memory requirements per user are *much* less.
 
  >> 
  >> 2: In SQL Forms 3.0 , what is the relationship between
  >> amounts/location of memory used and the "Records Buffered"
  >> setting?
Forms v3 has a pre-defined buffer that is shared evenly by all blocks
in an application. You can change the amount of buffer-space reserved
for a block by specifying "Records buffered". If you specify this for
all blocks, the total buffer-space will increase accordingly. You can
specify a runtime switch to buffer with temporary files instead.
  >> 
  >> 
  >> 3: How do LONG fields affect performance with regard to array
  >> fetching, record buffering, etc.
I don't think array processing is possible with LONG columns. I could
be wrong about this though.
  >> 
  >> 

I've found that the most common error in Forms v3 applications is to
use PL/SQL as if it were executing in the kernel. Processing large
volumes of rows in Forms PL/SQL cursors is *expensive*, since there is
no array-interface. With Forms v3.0.16 and above, you can call stored
database procedures (Oracle7) instead.
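
A minimal sketch of that pattern (hypothetical names; Oracle7 PL/SQL):

    -- instead of looping over a cursor in Forms PL/SQL, push the
    -- whole set operation into the server as one statement:
    CREATE OR REPLACE PROCEDURE raise_prices (pct IN NUMBER) AS
    BEGIN
        UPDATE price_list
        SET    unit_price = unit_price * (1 + pct / 100);
    END raise_prices;

The form then makes one call to raise_prices(...) from a trigger, and
the network carries a single call instead of thousands of rows.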
The most common error on the database side is to overdimension the
SGA. Reducing db_block_buffers by an order of magnitude can free up
a *lot* of memory *and* processing power.
                                                        

--------------------------------------------------------

From: Magnus Lonnroth - Oracle Sweden <mloennro_at_SE.oracle.com>
Subject: RE: Oracle pigs


> This is very interesting. Our Oracle DBA said
> "Why?" Please explain why db_block_buffers would be "too large"
> and how reducing this would make a difference in performance.
Well, that's kind of hard without getting into a really lengthy and
over-technical discussion. But look at it this way: the dbwr and
oracle-server processes are doing a lot of "small" jobs in the
db-block cache: maintaining a least-recently-used list, hash-index,
dirty-list etc. All of these "jobs" have init.ora parameters which are
usually just documented as "do not touch". Actually, almost all of
them are dimensioned according to the values of other parameters, such
as db_block_buffers, processes, transactions etc. The bottom line is
that if you, for example, set processes to 1000 on a MicroVAX, a lot
of other structures will become so large and unmanageable that the
MicroVAX is brought to its knees.

More specifically, oversizing db_block_buffers can easily put an
enormous load on dbwr, because it may spend a lot of its time looking
for dirty blocks. The number of blocks it will look for grows with the
number of db-block buffers. This will happen at least every 5 seconds
(dbwr timeout), unless the dirty-list is large enough to satisfy dbwr.
This is just one example, but it's easy to identify: dbwr uses a lot
of CPU even when there is little or no database activity.

Here is what you want to achieve: an adequate db-block cache hit-ratio
at the lowest possible memory-cost. Do this by decreasing the value of
db_block_buffers to 100 or 200. Set db_block_lru_statistics=true and
db_block_lru_extended_statistics=500 in init.ora. Then run a couple of
your applications, or better: let your users work as usual for a
couple of hours, and then examine the hit-ratio. This is done by:

  select n.name, v.value
  from   v$sysstat v, v$statname n
  where  n.name in ('db block gets', 'consistent gets', 'physical reads')
  and    n.statistic# = v.statistic#;

The hit-ratio is:

  100 - (physical reads / (db block gets + consistent gets) * 100)

You can write this in one select-statement, but it's kind of hairy,
involving several sum(decode(...)) -- see the sketch below.

If your hit-ratio is > 90%, examine x$kcbcbh to see the effect of
reducing db_block_buffers. x$kcbcbh has one row per block, tallying
the number of times each block was used: e.g. if block 199 has a value
of 50, it means that if you reduce db_block_buffers by 1, you would
have gotten an additional 50 physical reads during the test.

If your hit-ratio is < 90%, examine x$kcbrbh to see the effect of
increasing db_block_buffers. It's important to understand why your
hit-ratio may be low. It is usually because you don't have enough
buffers, but theoretically it could also be because your applications
are simply not re-reading any data. This is pretty unusual, but if it
is the case, then increasing the number of buffers will not do any
good. If your hit-ratio is not improved by increasing the number of
buffers, DON'T CONTINUE TO INCREASE IT!

Anyway, try to keep your SGA as small as possible while maintaining a
good hit-ratio. Disable the lru-statistics when you are through
testing, since they increase the load a lot.
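
For reference, the hairy one-statement form would look roughly like
this (a sketch, not verified against a live instance):

  select 100 - ( sum(decode(n.name, 'physical reads',  v.value, 0)) /
               ( sum(decode(n.name, 'db block gets',   v.value, 0)) +
                 sum(decode(n.name, 'consistent gets', v.value, 0)) ) * 100 )
         "hit ratio %"
  from   v$sysstat v, v$statname n
  where  n.name in ('db block gets', 'consistent gets', 'physical reads')
  and    n.statistic# = v.statistic#;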
> When one uses entirely embedded SQL with C code, is the SQL interpreted?
Yes, SQL is always parsed at run-time regardless of what interface you
are using: embedded SQL, OCI, Oracle tools, ODBC, ... Parse-time is
optimized in v6 by increasing the init.ora parameters returned by this
query (after the db has been running for a couple of hours):

  select parameter, count
  from   v$rowcache
  where  usage = count
  and    getmisses / decode(gets,0,1,gets) > 0.05
  and    gets > 0;

In v7, the dictionary row-cache is dynamically adjusted and there are
no init.ora parameters to worry about.
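
For example, if that query flagged the dc_tables and dc_columns
dictionary caches, you would raise those V6 init.ora parameters (the
values here are invented for illustration):

  # init.ora -- bump only the caches the query actually returns
  dc_tables  = 300
  dc_columns = 1500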
> Can one get to completely compiled code using Oracle?
Only in Oracle7 stored procedures, packages and functions. The PL/SQL
code is compiled into DIANA code, and transformed into P-code at
run-time. Don't ask me what this is.

Hope this answers some of your questions,

Magnus Lönnroth, Oracle Sweden
Tech. Sales & Consultant
Internet: mloennro_at_oracle.com
Phone: +46 31830325

>I'm sitting up to my knees in snow on the wet and cold west-coast of
>Sweden dreaming of bikinis and Singapore slings and feeling pretty
>depressed about it.

==================================================================

> ... stuff deleted ...
>
> Actually, my concern was the amount of CPU required for the tiny amounts of
> I/O. Some of the questions posted by our DB group involved memory.
>
> But, just a few Oracle users ate up the CPU of a 6630. A few Oracle report
> writers ate up a 4000-400.
>
> And, yes, we have had Oracle consultants tell us our system is well tuned,
> the DB is well tuned, and we've had tons of memory thrown at the beast.
> Our SGAs are huge, our working sets incredible. That was all documented
> in the original note.
A well tuned O/S and a well tuned database (internals) will give you
about a 10% improvement in throughput over a badly tuned O/S and
database combination. Tuning an application's database access can give
you a 1% to 1000+% improvement.

Tuning Oracle is a balancing act. You balance I/O vs CPU vs memory;
one of them will be your bottleneck no matter what you do. What you
need to do is push each as close to "the wall" (85% on a VAX) as
possible while ensuring that CPU becomes your bottleneck.

You mention giving it a lot of memory (in the SGA) and running out of
CPU. This simply means that you've reduced your database I/O at the
expense of using more memory, and hit the wall on CPU. If you have a
large SGA and each process is using bulk CPU then it sounds like
you're doing inefficient row scans (or full table scans) and most of
them are hitting the SGA buffers. If this is the case then it's your
application that needs tuning.

Have you turned on trace and timing for your database and then run the
trace files through the TKPROF utility? This gives excellent feedback
on how each database call is being processed. It also shows the
execution path and what indexes are being used.

ReportWriter can be a dog if you haven't written the report with
performance in mind.

* Bruce Pihlamae -- Database Administration *
* Commonwealth Department of *
* Health, Housing, Local Government & Community Services *
* Canberra, Australia (W) 06-289-7056 *

-------------------------------------------------------------------------

From: Monty Tompkins <TOMPKINS_at_SNYSYRV1.BITNET>
Subject: large pgflquo
To: Multiple recipients of list ORACLE-L <ORACLE-L_at_CCVM.sunysb.edu>

Our default page file quota on our VAX 6420 is 25000. This is
sufficient for one Oracle image (our users are mostly using
runmenu50). Some of the Oracle products require more pgflquo. We have
found that SQL*TextRetrieval needs about 75000 pages for some of the
administration and designer functions. CASE Designer needs 100000;
CASE needs more because it presents multiple views simultaneously.

Usually the reason for needing the extra pgflquo is that a form or
menu is calling another form or menu externally (via a "host"
command). We have some applications that call sqlplus as a host
command. We have increased our Oracle users to pgflquo=50000 to cover
"normal" host commands from a form or menu. Usually the higher page
usage only occurs for a short period (the host command does a quick
operation and then returns control to the form or menu).

* Monty Tompkins                  * Phone (315) 464-4130                  *
* SUNY Health Science Center      * Internet tompkins_at_vax.cs.hscsyr.edu *
* Syracuse, New York, 13210 (USA) * Bitnet tompkins_at_snysyrv1            *

--------------------------------------------------------------------------------
Received on Tue Dec 07 1993 - 17:34:01 CET
