
Re: How to Determine Database Performance Limits

From: Connor McDonald <connor_mcdonald_at_yahoo.com>
Date: Thu, 04 Oct 2001 19:16:10 +0100
Message-ID: <3BBCA76A.1C98@yahoo.com>


Ethan Post wrote:
>
> Correct me if I am wrong, but even a trace file with event 10046 (level
> 8..12?) does not actually store the bind variables with the SQL. This means
> I need to have some sort of program that puts all that together. I have
> James Morley's book and I am going on memory on that one. Anyway, Pro*C is
> another problem since I know nada about writing anything in Pro*C and don't
> have time to learn cuz my other big project is setting up some application
> on HACMP, which is also pretty heavy on the brain. I think I am going to be
> OK. The client requirements far exceed what I thought; my tests show the
> current box might be capable of doing the job but certainly won't meet
> long-term requirements. We might be getting some time in the IBM JDE
> testing center to get a much better idea of what we will need. I think I am
> off the hook for the moment :)
>
> Thanks Connor!
>
> - Ethan
>
> "Connor McDonald" <connor_mcdonald_at_yahoo.com> wrote in message
> news:3BBB8D25.355A_at_yahoo.com...
> > Ethan Post wrote:
> > >
> > > Hey Ronald, good to hear from you. GnuMetrics has gone by the wayside
> > > for the moment. I still use a greatly modified version of it for all
> > > of this testing. Your track is the way I am going, but I am finding it
> > > will be pretty hard to duplicate the application load because it is so
> > > poorly designed. Perhaps if I have 100 processes randomly select from
> > > 5 tables, perform inserts, updates, deletes and full table scans that
> > > return no rows I might get close to it!
> > >
> > > - Ethan
> > >
> > > "Ronald" <devnull_at_ronr.nl> wrote in message
> > > news:67ce88e7.0110022259.460585ac_at_posting.google.com...
> > > > "Ethan Post" <Blah_at_Blah.com> wrote in message
> > > news:<AAsu7.19228$Xk4.1286159_at_news1.rdc1.sdca.home.com>...
> > > > > One of my tasks at hand is ensuring our 12 CPU AIX box can process
> > > 150,000
> > > > > lines of sales orders per hour via J.D. Edwards. We have performed
> > > > > extensive testing and determine we will have to move the application
> > > logic
> > > > > to other machines and leave the DB where it is. My job now is to
> ensure
> > > the
> > > > > current box running 8.1.6 is capable of the task at hand. I am
> > > currently
> > > > > focusing on redo because I think that could be a potential
> bottleneck.
> > > > > Based on the numbers we already have during testing and
> extrapolating
> > > them
> > > > > we will need to be capable of the following:
> > > > >
> > > > > 120-160 MB of redo generated per minute
> > > > > 300-400 MB of traffic per minute sent from client (middle tier) via
> > > SQL*Net
> > > > > 600-800 MB of traffic per minute sent to client (middle tier)
> > > > > 1.6-2 million SQL*Net roundtrips per minute.
> > > > > 45-60 thousand commits per minute.
> > > > > 1.4-1.8 million user calls per minute.
> > > > >
> > > > > What will this achieve? An amazing 41 records per second! This
> gives
> > > you
> > > > > some idea of the swell coding going on in a modern day ERP system.
> > > > >
> > > > > Anyway, my question is how the heck do I determine if the box can
> > > actually
> > > > > do this? What is the best way to determine if the current box will
> > > handle
> > > > > the load. I can tell you simulating a load this heavy is hard to
> do!
> > > > >
> > > > > Your advice?
> > > > >
> > > > > Thanks,
> > > > > Ethan Post
> > > >
> > > > Hi Ethan, how about gnumetrics?
> > > > I would try to emulate the load. Just write a few pl/sql routines that
> > > > insert records in a few tables. Fetch from sequences, update a few and
> > > > commit. Run this from the client using sqlplus, run it from 10
> > > > sessions concurrently and see what happens.
> > > >
> > > > I would use arbitrary tables for this. What is important is that you
> > > > have a reasonable row length and a reasonable mix of tables. Maybe 5
> > > > or 10 tables will do. Give them a sequence, a sysdate, a varchar2 with
> > > > random contents, and a few indexes. If there are triggers in JDE, also
> > > > create them in the test. Don't forget to also do some lookups. This
> > > > will tell you if it is possible or not. If you cannot get the numbers
> > > > you need in this test, you will never be able to get them with the
> > > > real app. If you do manage to get the numbers, you MIGHT be able to
> > > > get them in JDE.
> > > >
> > > > What disks do you have?
> > > > What network do you have?
> > > >
> > > > Ronald.
> > > > -----------------------
> > > > http://ronr.nl/unix-dba
> >
> > Do you have the target application ready to go? You could always
> > capture the application code in trace files - then build some Pro*C
> > programs to duplicate it, and fire up as many of them as you like...
> >
> > Not a perfect benchmark by any means, but a possible starting point...
> >
> > hth
> > connor
> >
> > --
> > ==============================
> > Connor McDonald
> >
> > http://www.oracledba.co.uk
> >
> > "Some days you're the pigeon, some days you're the statue..."

Not in the SQL itself, but the bind variables are there in the trace, and I think they are linked to the statements by cursor number.
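
For reference, the level-12 extended trace that captures those bind values is
normally switched on per session with something like the following (the usual
event syntax - double-check it against your version, and remember the trace
file ends up in user_dump_dest):

    -- turn on extended SQL trace for this session
    -- level 8 = waits only, level 12 = waits plus bind variable values
    alter session set events '10046 trace name context forever, level 12';

    -- ... run the statements you want captured ...

    -- turn tracing back off
    alter session set events '10046 trace name context off';

In the resulting trace file each statement appears under a PARSING IN CURSOR
#n entry and the bind values show up in separate BINDS #n sections keyed by
the same cursor number - that cursor number is the link that lets a replay
program (Pro*C or otherwise) stitch the SQL and its binds back together.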

hth
connor

-- 
==============================
Connor McDonald

http://www.oracledba.co.uk

"Some days you're the pigeon, some days you're the statue..."
Received on Thu Oct 04 2001 - 13:16:10 CDT
