RE: Single TS vs Multiple

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Mon, 28 Dec 2015 22:25:59 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D9282B99CA_at_EXMBX01.thus.corp>


I am a little surprised by the following comment:

I would be willing to bet raid 5 will give you better overall performance if your database is mostly read. Raid 5 is very fast for reads.

Given that RAID-10 includes mirroring, a "read-only" system has two possible locations from which to satisfy each read, so disk queues of (approximately) half the length, a lower probability of contention, and hence faster average read times - especially when the discs get busy.
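The halved-queue argument can be sketched with a toy load model (purely illustrative - it assumes uniformly random reads and ignores service times, caching, and controller behaviour): a RAID-10 read can be served by either side of its mirror pair, so the array can send it to the shorter queue, while a single-copy layout has exactly one disk that holds each block.

```python
import random

def assign_reads(reads, disks=12, mirrored=True, seed=1):
    """Toy model: distribute random reads across spindles.

    mirrored=True  - RAID-10 style: each read may be served by either
                     side of a mirror pair, so it goes to the shorter queue.
    mirrored=False - single-copy style (e.g. RAID-5): each read has
                     exactly one disk holding the data.
    Returns the per-disk queue lengths (read counts).
    """
    rng = random.Random(seed)
    queues = [0] * disks
    for _ in range(reads):
        if mirrored:
            pair = rng.randrange(disks // 2)
            a, b = 2 * pair, 2 * pair + 1
            # pick the less-loaded side of the mirror pair
            queues[a if queues[a] <= queues[b] else b] += 1
        else:
            # only one disk holds this block
            queues[rng.randrange(disks)] += 1
    return queues
```

Comparing `max(q) - min(q)` for the two modes shows the mirrored layout keeps load noticeably smoother; within each mirror pair the queue lengths can never differ by more than one.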

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle



From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of Nabil Jamaleddin [nmjamaleddin_at_multiservice.com]
Sent: 28 December 2015 15:16
To: jt2354_at_gmail.com; mark.powell2_at_hpe.com; 'Oracle L'
Cc: nmjamaleddin_at_multiservice.com
Subject: RE: Single TS vs Multiple

“Going to build a 12 disk RAID10 to support all the data, so 6 mirrored groups, then striped. Two separate disks to handle the multiplexed redo.” “Major OLTP application. Tons of reads, fair amount of small writes”

I would personally not worry about how many tablespaces at this point, because performance-wise the number of tablespaces is not going to make a big difference, if any at all. I would first nail down why you want to use raid 10.

I would be willing to bet raid 5 will give you better overall performance if your database is mostly read. Raid 5 is very fast for reads.

I would do something like this:

5 disks raid 5
5 disks raid 5
2 disks raid 10 (for system, temp, users, undo)

Or could do

6 disks raid5
6 disks raid 10 (for system, temp, users, undo)

Or could do

6 disks raid5
4 disks raid 10 (for system, temp, users, undo)
2 disks raid 10 (for archive logs)

Or however you like.

Build your system a few different ways, test performance and then go with the set up that gives you the best performance.

Once you get your raid figured out, then ask about tablespaces. If your raid is one virtual disk, then put all objects into one tablespace - unless you want a read-only tablespace and a read/write tablespace, or something like that.

How is that for a can of worms : - )

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of John Thomas
Sent: Tuesday, December 22, 2015 2:43 PM
To: mark.powell2_at_hpe.com; Oracle L
Subject: Re: Single TS vs Multiple

RMAN can use SECTION SIZE to divide up backup and restore of bigfile tablespaces. You do have to weigh that against your hardware's ability to restore one large datafile in parallel, compared with restoring a single small datafile - but how often do you realistically expect to restore just one datafile?
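For reference, a multisection backup looks something like this (a sketch only - the section size, channel count, and tablespace name would all need adapting to your system; multiple channels are what actually make the sections run in parallel):

```sql
RUN {
  # one channel per section you want backed up concurrently
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  # break the bigfile datafile into 32 GB sections
  BACKUP SECTION SIZE 32G TABLESPACE users;
}
```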

You could compare that to the effort required to monitor all datafiles to ensure free space for extension and make your judgement.
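The monitoring side is routine dictionary-view work - a rough sketch of the kind of query involved (assumes access to the DBA views; autoextensible files would need DBA_DATA_FILES consulted as well):

```sql
-- free space remaining per tablespace, tightest first
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY free_mb;
```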

Unless you have plenty of downtime for a complete restructure, I don't think I would go as far as moving all objects into a single bigfile TS, but you might consider doing that for your active objects, the ones likely to extend.

One more thing though. You have a 40GB Data TS dating back to 2000? On Windows with a 4K blocksize the datafile size limit is 16GB, or 32GB for an 8K blocksize. Back around 2000 there was a 2GB limit on most 32-bit Linux/Unix platforms, I think.

Regards,

John

On Tue, 22 Dec 2015 at 18:16 Powell, Mark <mark.powell2_at_hpe.com> wrote:

If you have no driving need to make a change then maybe you should just leave the database as it is.

Otherwise I will complicate your decision by adding another option.

If it is highly unlikely you would ever need to set just one application’s data back to a point in time different from the rest of the database, then why not move all the objects, based on size, into one set of small and large table and index tablespaces, using locally managed tablespaces with uniform extents and ASSM?
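In 11g that layout might look something like this (a sketch only - the tablespace names, file paths, sizes, and extent sizes are all hypothetical and would need sizing against the actual segments):

```sql
-- small-extent tablespace for the many small tables
CREATE TABLESPACE data_small
  DATAFILE '/u01/oradata/ORCL/data_small01.dbf' SIZE 4G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;

-- large-extent tablespace for the few big tables
CREATE TABLESPACE data_large
  DATAFILE '/u01/oradata/ORCL/data_large01.dbf' SIZE 16G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M
  SEGMENT SPACE MANAGEMENT AUTO;
```

Matching index tablespaces would follow the same pattern; uniform extents keep fragmentation predictable, while ASSM handles space within each segment automatically.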

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Storey, Robert (DCSO)
Sent: Tuesday, December 22, 2015 1:06 PM
To: Andrew Kerber
Cc: Oracle L
Subject: RE: Single TS vs Multiple

I don’t have to worry about accidental deletes. They can only delete from within the app, and I’m the only one with read/write SQL-level access. No ASM here; I don’t really see the benefit of it. I suspect that if I have to do a TSPITR recovery, it will be to return the entire database to a point. Yep, there is a definite benefit to index/data separation with regard to corruption.

I have no driving need….just curiosity.

From: Andrew Kerber [mailto:andrew.kerber_at_gmail.com]
Sent: Tuesday, December 22, 2015 11:50 AM
To: Storey, Robert (DCSO)
Cc: Oracle L
Subject: Re: Single TS vs Multiple

Boy, now you opened a can of worms. :) My personal preference is to have index and data tablespaces separate, and that is primarily for logical separation. Also, on occasion when I hit corruption, I have managed to avoid downtime if the corruption was on an index tablespace rather than a data tablespace.

There are backup and recovery advantages to keeping your tablespaces a manageable size (manageable being a flexible definition). If you have users that might accidentally delete data, the ability to do TSPITR is important, and the size of the tablespace will drive the time to do your point-in-time restore of the tablespace. Bigfile tablespaces are mostly used with ASM; if you are going with a file system, the file size will probably be limited by your operating system. Also, if using bigfile tablespaces it is important to be able to multi-thread your backup, or the time involved will be prohibitive.

There are many other concerns on this subject, you could probably write a book. The answer is that it depends on your requirements.

On Tue, Dec 22, 2015 at 11:03 AM, Storey, Robert (DCSO) <RStorey_at_dcso.nashville.org> wrote:

I’m working on a redesign of the structure for my database, which has been active since 2000. I have three main users/schemas that hold the data for the app (the only app running in the database).

User A has 341 tables spread across 9 tablespaces. The tablespaces basically group like objects that support a specific area of the app. Also, each “data” tablespace has a corresponding “index” tablespace. So, 18 tablespaces to support those 341 objects.

User B has two tablespaces. One TS has 44 objects, the other has 1 (an auditing table, the largest table in the system). Each TS has its own index tablespace.

User C owns all the PL/SQL. No physical objects.

User B is my schema that has about 100 objects. One data TS and one index TS.

Then the assorted system TS, temp, undo, etc.

All tablespaces are supported by a single datafile. Again, same design since 2000 with no driving need to change. My data tablespaces total just about 40 GB, indexes about 27 GB.

Major OLTP application. Tons of reads, fair amount of small writes.

Going to build a 12 disk RAID10 to support all the data, so 6 mirrored groups, then striped. Two separate disks to handle the multiplexed redo.

So, other than organizational purposes, is there any benefit to having multiple TS with their own datafiles, vs. fewer TS, each of which might have a couple of datafiles? Or even just one datafile using the BIGFILE option within 11g? I can’t take any one TS offline without taking down the app.

Granted, if a datafile for one of the 13 or so current TS goes bad, I only have to restore that file, instead of one really large one.

Thoughts?

--

Andrew W. Kerber

'If at first you dont succeed, dont take up skydiving.'




--

http://www.freelists.org/webpage/oracle-l
Received on Mon Dec 28 2015 - 23:25:59 CET
