Re: How to tune database with lots of very light queries

From: Zhu,Chao <zhuchao_at_gmail.com>
Date: Wed, 30 Apr 2008 21:13:36 +0800
Message-ID: <962cf44b0804300613v378b2fd7lbc9739f1f13baa61@mail.gmail.com>


Thanks to all who replied. This looks like an interesting topic :) To clarify the situation for those who are interested:

The system is (or should be) well designed, and so far we don't have any performance problems (we typically upgrade before load hits the servers' hard limits). The reason I raised this question was mostly to hear other people's comments and opinions, and I do see good comments. Thanks! We are expected to support much higher traffic and user activity going forward, so we will need many more CPUs to support the applications. That could mean millions of dollars, so we have been asked to look for any opportunity to tune the system.

We have both dedicated server and shared server setups, plus application-side connection pooling, but some databases still see so many connections that we have to go with shared servers. Of course the basics such as bind variables are done properly; otherwise it would not scale to 10K+ executions per second.
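For readers who want the concrete pattern, here is a minimal sketch of the bind-variable basics in SQL*Plus (the table app_users and its columns are made up for illustration):

    -- Literal version: every distinct user_id value creates a new cursor,
    -- forcing a hard parse and churning the shared pool.
    SELECT user_name, status
      FROM app_users
     WHERE user_id = 12345;

    -- Bind-variable version: one shared cursor, a soft parse at most
    -- (or no parse at all if the application caches the statement handle).
    VARIABLE uid NUMBER
    EXEC :uid := 12345

    SELECT user_name, status
      FROM app_users
     WHERE user_id = :uid;

At 10K+ executions per second, keeping cursors shared like this is what keeps the library cache usable at all.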

Dan has very good points. We have a pretty big team, so the developers and architects don't necessarily know the database well (or they have their own concerns and guidelines that may lead to non-optimal database usage), and the DBAs don't necessarily know the application well, so they don't understand why certain SQL is called so often. We have had some success working with developers to find better ways to implement their business needs.

Some very detailed DBA changes, such as hash partitioning the busy tables, did reduce buffer_gets by 20%-30% (due to the decreased B-tree level), and some IOTs are used as well. But it would be major work for the DBAs to re-partition or convert to IOTs across so many tables.
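To make the kind of change concrete, here is roughly what that DDL looks like (orders_hash/session_state and their columns are hypothetical; the real partition counts and column lists will differ):

    -- Hash partitioning a busy table with a LOCAL primary key index gives
    -- each partition its own smaller B-tree, which is where the 20%-30%
    -- drop in buffer_gets came from (lower index height per lookup).
    CREATE TABLE orders_hash (
      order_id   NUMBER        NOT NULL,
      cust_id    NUMBER        NOT NULL,
      order_data VARCHAR2(200),
      CONSTRAINT orders_hash_pk PRIMARY KEY (order_id) USING INDEX LOCAL
    )
    PARTITION BY HASH (order_id) PARTITIONS 16;

    -- An index-organized table stores the row data inside the primary key
    -- index itself, so a PK lookup touches one structure instead of
    -- index plus table.
    CREATE TABLE session_state (
      session_id NUMBER        NOT NULL,
      state_data VARCHAR2(200),
      CONSTRAINT session_state_pk PRIMARY KEY (session_id)
    ) ORGANIZATION INDEX;

The pain is that the existing big tables have to be rebuilt (or redefined online with DBMS_REDEFINITION) to get there, which is the major-work part.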

Some more detailed questions for open discussion:
1. Does anyone have experience reducing the cost of soft parsing? Setting cursor_space_for_time/session_cached_cursors does reduce the load on the library cache, but it does not seem to save much CPU (the CPU used by parsing was already pretty low); see the sketch after this list.
2. How do real-world PC servers (the AMD boxes we are targeting) compare to Sun/HP/IBM boxes with the same number of cores, performance/cost/availability-wise? Sun's recent Niagara/Niagara II works great compared to the old SPARC.
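For question 1, what I have tried looks roughly like this (the values are illustrative, not recommendations):

    -- Cache closed cursors in each session so repeated statements get a
    -- cheaper soft parse; static at the instance level (needs a restart),
    -- or set it per session for testing.
    ALTER SYSTEM SET session_cached_cursors = 200 SCOPE = SPFILE;
    ALTER SESSION SET session_cached_cursors = 200;

    -- Pins cursors in the shared pool while they are open; trades memory
    -- for less library cache work (static parameter as well).
    ALTER SYSTEM SET cursor_space_for_time = TRUE SCOPE = SPFILE;

    -- Gauge how much parsing the session cursor cache is absorbing.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('session cursor cache hits', 'parse count (total)');

As noted above, the library cache load went down but CPU barely moved, which is what prompted the question.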
Anyone who would like to discuss this in detail can email me offline; maybe we can share more details with each other.

On Wed, Apr 30, 2008 at 9:54 AM, Mark Brinsmead <pythianbrinsmead_at_gmail.com> wrote:

> 3 to 6 gets/exec? No complex joins? 10,000 executions per second?
>
> It's pretty tough to beat 3 to 6 gets/exec, at least when working at the
> *statement* level. I would be quite surprised, though, if there were not
> some huge opportunities for performance improvement in the application code.
>
> This sounds (almost exactly) like what I would expect from an application
> written by developers who do not understand *joins*, and code "around"
> their lack of understanding by using loops that issue tens of thousands of
> small queries where a single SQL statement could (should) have been used.
>
> On the other hand, perhaps you are working with a lean, mean, high
> performance OLTP system where everything has been designed, tested, and
> tuned with immaculate care and uncommon skill. If this is the case, you can
> probably just put your feet on the desk and relax until somebody needs
> something recovered from backup. :-)
>
> Okay. Probably not.
>
> You have not said much about actual performance problems. Is the database
> meeting performance expectations? Which business functions, if any, are
> slower than they *should* be? Do you have a statspack report? Can you
> reduce PIOs by increasing the buffer cache or implementing a keep pool? Is
> your IO subsystem slower than it might be? Do you need more CPUs? Or is
> everything just fine? It *could* be...
>
>
> On Tue, Apr 29, 2008 at 8:58 AM, Zhu,Chao <zhuchao_at_gmail.com> wrote:
>
> > We have some databases with a load profile of lots of light queries
> > (mostly PK-based); each execution costs only 3-6 buffer_gets, with no
> > complex joins, etc.
> > But due to excessive executions (10K+ queries per second), the load is
> > still driving pretty high.
> >
> > Does anyone have experience tuning this kind of application/database?
> > From a single-SQL point of view, they are already perfect.
> >
> > --
> > Regards
> > Zhu Chao
> > www.cnoug.org
> >
>
>
>
> --
> Cheers,
> -- Mark Brinsmead
> Senior DBA,
> The Pythian Group
> http://www.pythian.com/blogs

-- 
Regards
Zhu Chao
www.cnoug.org

--
http://www.freelists.org/webpage/oracle-l