

Re: Index compression vs. table compression

From: Howard J. Rogers <>
Date: Thu, 30 Dec 2004 11:12:59 +1100
Message-ID: <41d3480c$0$1084$>

Rick Denoire wrote:
> "Howard J. Rogers" <> wrote:

>>>>Unless you twiddled things when you were creating your file systems, 
>>>>then that's wrong: it should be 4K (assuming you're using bog-standard 
>>>>ext2, for example). You'll have I/O problems as a result of having an 
>>>>Oracle block size that's bigger than your file system buffer size. 

> Where is the evidence?
>>>This is a question I still have to investigate. Do you have numbers?
>>>Benchmarks? Records of I/O per sec? One has to MEASURE in order to
>>>know - too many myths out there, and this is a complicated matter.
>>Of course. That's why Steve Adams has articles on it, and why I do.

> Seems strange to me that you rely that much on other people's opinions.

It seems a little strange to me that someone who clearly knows it all, or thinks they do, posts here for advice in the first place.

I place reliance on others' opinions when I can verify that the results they suggest I should get are in line with the results I actually get. It's called testing.
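For what it's worth, the filesystem block size being argued about is easy to verify directly rather than assumed. A minimal Python sketch (the 8K `db_block_size` value below is purely illustrative, not taken from either poster's system):

```python
import os

# Query the filesystem's fundamental block size for a given path.
# On a default ext2/ext3 filesystem this is typically 4096 bytes,
# though mkfs options can change it.
def fs_block_size(path="."):
    return os.statvfs(path).f_bsize

# An Oracle db_block_size larger than the filesystem block size is
# the mismatch under discussion; 8192 is just an example setting.
db_block_size = 8192
mismatch = db_block_size > fs_block_size(".")
print(fs_block_size("."), mismatch)
```

Running this against the mount point holding the datafiles removes one unknown from the argument before any benchmarking starts.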

> I am missing the facts at also. Don't get me wrong,
> Ixora is a respectable source of information. But it is not a church.

No-one was claiming that it was.

>>>would expect that several read requests of contiguous blocks be
>>>gathered into one request by the OS before being actually issued.
>>Let's not expect things, eh?!

> In the absence of enough information, I would expect that, but I am
> not insisting that because of this write coalescing feature Oracle
> performance will be positively affected. I just don't know.

Precisely. But I do, because I've tested it.

>>>Isn't this a classical feature of Unix-like filesystem buffers? (I
>>>won't go into read-ahead details done by the storage).
>>Go read what Steve Adams has to say on the matter.

> Go measure results. Sorry.

I have done so. Go learn what "condescending" means.


> That reminds me of the "cache" option for tables; I also never understood
> how that could go well. Nowadays, Oracle discourages its use.

No they don't. They say that it is unnecessary and less effective than the multiple buffer pools feature you seem not quite to grasp properly. CACHE was an attempt to keep large full table scans at the cold end of the LRU list, and thus prevent the warm half from being flushed out. Precisely what the RECYCLE pool does. Only the RECYCLE pool does the job more effectively and more certainly, because it's not trying to 'partition' a single LRU list, but has one all to itself.
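The difference can be illustrated with a toy simulation (a deliberately simplified sketch, not Oracle's actual buffer cache algorithm; the pool sizes and block names are made up):

```python
from collections import OrderedDict

# Toy model of a fixed-size LRU pool: oldest (cold) entries first,
# newest (warm) entries last.
class Pool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def touch(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # re-warm to the MRU end
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict from the cold end
            self.blocks[block] = True

# One shared pool: a large full table scan flushes the warm working set.
shared = Pool(capacity=8)
for b in ("hot1", "hot2", "hot3"):
    shared.touch(b)
for b in range(100):                  # FTS reads 100 one-off blocks
    shared.touch(f"scan{b}")
print("hot1" in shared.blocks)        # the hot block has been flushed out

# Split pools: the scan is confined to its own RECYCLE pool, so the
# DEFAULT pool's warm blocks survive untouched.
default, recycle = Pool(capacity=4), Pool(capacity=4)
for b in ("hot1", "hot2", "hot3"):
    default.touch(b)
for b in range(100):
    recycle.touch(f"scan{b}")
print("hot1" in default.blocks)       # still cached
```

The point the simulation makes is the one above: a dedicated RECYCLE pool doesn't try to partition one LRU list, it gives the scan traffic a list of its own.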


> course!", I would say. It is likely to be misused, it could
> practically flush your default cache. I do not believe blindly what is
> against my understanding and recommend every one to do the same.

I have no problems with that as a piece of advice. I have some problems with the precise degree to which you actually understand the matters being talked about. And Oracle is never about "belief", blind or otherwise. Just test and verify.

>>It's also the case that blocks can slip into the cold half of the LRU 
>>list which you'd rather not have flushed out. By directing FTS to their 
>>own portion of the cache, you can give such blocks a bit more time to 
>>get 're-warmed' up to the MRU end of the LRU list.

> I think you don't seem to grasp my point. If you detach one portion of
> the memory from the default cache for a recycle cache, it will be
> missing in the default cache. There won't ever be any blocks trying to
> get re-warmed there.

Precisely (in other words, I think you've rather missed what I actually wrote).

> No cache - no blocks, no re-warming. Do not buy
> *every feature* Oracle sells indiscriminately.

>>A recycle cache can be, in short, extremely useful.

> Did you actually check that?

Rick: when do you stop sounding like a complete pillock? Of course I checked that. I test and test and test routinely. Every day, in fact, when I'm not in the training room. So please spare me the "I don't buy into Oracle marketing" line, because I suspect I actually invented it in the first place.


>>I do have tangible pointers to Steve Adams' website where it's 
>>discussed. It's silly putting out benchmarks on the matter, and neither 
>>Steve nor I do.

> I respect your opinion, but I dislike it. The proof of your hypothesis
> is that it would be silly to try to refute the opposite?!

That is not what I said. The point was that the theory is sound, and there are thus good grounds for caution. But the degree to which caution should be exercised needs to be evaluated on each hardware rig.

> Frankly, 25% is completely meaningless if you consider the enormous
> impact that your theory about blocksize mismatch should have on DB
> operations. Didn't we all learn that the really expensive part of DB
> work consists of its "mechanical" components (e.g. moving heads on
> disks)? And that accessing disks can be orders of magnitude slower
> than the pure electronic access?

I have no idea what you're on about now. I'm talking the exact same bit of spinning ceramic, formatted with exactly the same file system, yet two files internally organised into different Oracle block sizes, and still being able to measure a significant performance degradation.

> OK, I see you are a believer.

Try not to patronise me, OK? This isn't belief. It's measurable, and testable. But you can test it for yourself, which is the only test that is going to make any sense.

> I actually already began doing preliminary tests (which varied *more
> than 25% under consistently same conditions*). I installed a little
> tool called orabm/orastress and ran it as a test. Over the next weeks we
> see. I intend to run the benchmark on a Linux box with/without
> directIO, with tablespaces of different Oracle blocksize (4k and 8k).
> Then we will know for sure how sound this theory is.

For *you*.
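For what it's worth, the general shape of such a read benchmark can be sketched in a few lines of Python (a toy stand-in only: no direct I/O and no Oracle involved, the file size and request sizes are arbitrary, and OS caching will dominate the timings):

```python
import os
import tempfile
import time

# Time sequential reads of the same file with a given request size.
def timed_read(path, request_size):
    start = time.perf_counter()
    total = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(request_size):
            total += len(chunk)
    return total, time.perf_counter() - start

# Create 4 MiB of test data in a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))
    path = tmp.name

bytes_4k, t_4k = timed_read(path, 4096)
bytes_8k, t_8k = timed_read(path, 8192)
os.unlink(path)
print(bytes_4k == bytes_8k, round(t_4k, 4), round(t_8k, 4))
```

A real test along these lines would need O_DIRECT (or at least a cache flush between runs) and files laid out like actual datafiles before its numbers meant anything, which is exactly why each rig has to be measured rather than argued about.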

Whatever. You seem to know it all anyway. Plonk.

HJR

Received on Wed Dec 29 2004 - 18:12:59 CST
