Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: HELP - Setting up Disk I/O subsystem for a large database

From: Jim Kennedy <kennedy-family_at_home.com>
Date: Fri, 30 Nov 2001 00:02:17 GMT
Message-ID: <d0AN7.2036$726.842813@news1.sttln1.wa.home.com>


Basically, what you have is a physics problem, as it looks like you have discovered. The disk subsystem has some upper limit of I/Os per second, so (assuming I/O is the problem, which it probably is) you can do one of three things:

  1. Increase the number of I/Os per second that the disk subsystem can handle.
  2. Decrease the number of I/Os per second requested.
  3. Some combination of 1 and 2.

BTW, #2 sounds silly, but one way to accomplish it is to have Oracle access the data more efficiently (or run fewer reports :-) but that defeats the purpose of the system). It looks like you are trying to do that.
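To make options 1 and 2 concrete, here is a back-of-the-envelope sketch in Python. The disk count and per-disk IOPS figure are made-up illustrative numbers (roughly a 2001-era SCSI drive), not measurements from the poster's system:

```python
# Back-of-the-envelope I/O budget: capacity vs. demand.
# All numbers below are illustrative assumptions, not measurements.

def iops_capacity(n_disks: int, iops_per_disk: int) -> int:
    """Rough upper bound on I/Os per second the disk subsystem can deliver."""
    return n_disks * iops_per_disk

def is_io_bound(requested_iops: int, n_disks: int, iops_per_disk: int) -> bool:
    """True when the workload asks for more I/Os than the disks can supply."""
    return requested_iops > iops_capacity(n_disks, iops_per_disk)

# ~20 disks at roughly 100 IOPS each
capacity = iops_capacity(20, 100)   # 2000 IOPS total
print(is_io_bound(3000, 20, 100))   # demand exceeds capacity -> True
print(is_io_bound(1500, 20, 100))   # after option 1 (more/faster disks) or
                                    # option 2 (fewer I/Os requested) -> False
```

Option 1 raises `iops_capacity`; option 2 lowers `requested_iops`; either way the goal is to get the comparison back under the limit.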

In fixing performance problems you need to put a fence around the problem and keep making the fenced-in area smaller as you either eliminate or measure the problem areas. That is, you could be flooding some parts of the Oracle system with work while other parts are not. For example, if the temp tablespace and the rollback tablespace were on the same physical disk, then that disk might be flooded with I/O (and your OS's tools might show that). Upon further inspection you might find that the temp tablespace is the culprit. Then you can move it to another physical disk drive, or try to modify the system to do less on-disk sorting, for example.

So there is no easy recommendation for how to split it up, except to say that you need to know where the I/O problems are and how much each of your disk subsystems can handle, and then move things around (or add hardware) to minimize contention.
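As a toy illustration of "move things around to minimize contention", here is a greedy sketch in Python that spreads files across disks by their measured I/O rate. The file names and IOPS figures are hypothetical, and real placement also has to respect disk sizes and controllers:

```python
# Greedy placement: assign each datafile to the currently least-loaded
# disk, so that no single disk is flooded with I/O.
# File names and IOPS figures are hypothetical examples.

def place_files(file_iops: dict, n_disks: int) -> list:
    """Return a list of per-disk file lists, balancing total IOPS per disk."""
    disks = [[] for _ in range(n_disks)]
    load = [0] * n_disks
    # Place the heaviest I/O consumers first, so they land on different disks.
    for name, iops in sorted(file_iops.items(), key=lambda kv: -kv[1]):
        i = load.index(min(load))   # least-loaded disk so far
        disks[i].append(name)
        load[i] += iops
    return disks

layout = place_files(
    {"temp01.dbf": 400, "rbs01.dbf": 350, "users01.dbf": 120, "index01.dbf": 200},
    n_disks=2,
)
# The two hottest files (temp and rollback) end up on different disks.
```

This mirrors the temp/rollback example above: the two heaviest consumers are separated first, which is exactly the contention you want to avoid.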

Certainly, partitioning the information, if done properly, can help a great deal.

Jim

"Jim Harrington" <jharrington_at_accessdc.com> wrote in message news:u0dfvg5vull6ab_at_corp.supernews.com...
> Hi,
>
> Let me apologize ahead of time for the lack of precision in what I'm going
> to ask, I come from a development background and have recently been
> introduced to what (for my experience) is a large database. The following
> is longwinded, but I want to provide as much explanation as I can get the
> more specific answers.
>
> The database in question has three logical layers.
>
> First is the equivalent of several 'star' schemas, each for one of the
> driving entities that the database holds. By 'star' schema, I mean that
> there is a table that holds one of the driving entities and several detail
> tables that are related to the driving entity table via foreign keys. This
> part of the overall schema is accessible via a web application for
> maintenance purposes, so overall this part of the schema must be indexed
> for on-line access. One of the driving entities represents customer
> accounts, of which several million are possible. Another driving entity
> represents securities trades and contains up to 4 years of history at
> about 9 million per year.
>
> The second level is a series of materialized views, which are drawn from
> the tables in the on-line schema to join related driving entities and
> speed the summarization of the data contained in the on-line database.
> This part of the overall schema is accessible only through reports, so it
> is indexed more like a data warehouse.
>
> The third level is a set of regular views that are set up basically one
> view per report, to make report writing easier.
>
> The issue is the size of the database when fully implemented. We estimate
> that a development instance of this database will weigh in at about 40GB
> (much less history stored) and a production instance at about 250GB. These
> sizes are vastly larger than what was originally planned, so our server is
> experiencing difficulty and we're looking at how to re-configure it to
> improve performance. We are starting to do things like partitioning
> indexes, tuning back redo logging and strategically placing database data
> files, but don't think these alone will do the trick.
>
> As the server is currently I/O bound (only 4 SCSI controllers for 250GB of
> disk on ~20 disks), we are focusing on the disk I/O system first. We've
> found the Oracle documentation on designing and tuning for performance and
> have not really found what we need. Don't get me wrong, there is a huge
> amount of data and many courses of action contained in the doc, but little
> in the way of rules of thumb. As an example, I seem to remember that it
> used to be recommended that table and index data be on separate I/O paths
> (different disks or, better yet, different disks on different controllers).
> We will eventually get to the level of application knowledge required to
> use the Oracle tuning documentation only if we can get the app to perform
> well enough to get management to continue its development.
>
> All that said, are there any rules of thumb that I can apply to setting up
> the server to maximize database performance?
>
> Thanks,
>
> Jim Harrington
>
>
Received on Thu Nov 29 2001 - 18:02:17 CST

