Re: Suggestion on compression

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Tue, 10 Jan 2023 10:31:38 +0000
Message-ID: <CAGtsp8=+JAbY5L=EaESHJ6=Qs9Bq87e8-=Fx5wTeL=2Tf6WfQw_at_mail.gmail.com>



Do you have a decent-sized test system? It doesn't matter what the manuals say about how things work (and how well they work); you need to test for YOUR specific setup because you may have some extreme case or boundary condition that causes the generic implementation to break.

Having said that (and using 45 days to give you a safety margin over the 30 days you say you need as "current"):

  1. Set MOST of your indexes to "indexing partial", then for (table) partitions older than 45 days set indexing off (see the first sketch after this list).
  2. I wouldn't set index compression to automatic; generally it's a declaration that you don't know your data, and if you're worried about the impact on this specific table then you should know the data very well. I would set specific indexes to fixed prefix-length compression.
  3. Why "query high" when you've implied that you hardly ever read data older than 30 days; why not archive high? You may want to play a little safer and say, e.g., archive high over 180 days and query high from 45 to 180 days (see the second sketch below). (Query high might be a little CPU-intensive, so you might want to use query low rather than query high, but you'll have to do the cost/benefit analysis for your system.)
  4. At the point you move from non-compressed to compressed you could consider including an "attribute clustering" clause; this might improve the performance of some queries, and might improve the compression slightly.
  5. Monitor dba_tab_modifications for the table (and its partitions) so that you can see how many changes are reported for compressed partitions. Each change is likely to lead to a chained row, so you need to come up with an algorithm for choosing to rebuild any partition that has had "too many" changes (see the third sketch below).
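
A minimal DDL sketch for (1) and (2). All the names here (table txn, index txn_ix1, partition p_20221101) are hypothetical, so adjust to suit:

    -- (1) Make the index partial, then switch indexing off for the
    --     table partitions older than 45 days.
    alter index txn_ix1 indexing partial;
    alter table txn modify partition p_20221101 indexing off;

    -- (2) Fixed prefix-length compression for a specific index. The
    --     prefix length is an index-level attribute, so if the index was
    --     created nocompress you may need to recreate it with "compress n"
    --     before individual partitions can be rebuilt compressed.
    alter index txn_ix1 rebuild partition p_20221101 compress 1;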
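
For (3) and (4), a sketch of the partition moves, again with hypothetical names (the clustering column list is purely illustrative). The clustering clause is declared once at the table level and is then applied whenever a partition's data is moved:

    -- (4) Declare attribute clustering; columns are illustrative only.
    alter table txn
            add clustering by linear order (account_id, business_date);

    -- (3) Compress partitions as they age, updating the local indexes
    --     as part of the move so they aren't left unusable.
    alter table txn move partition p_20221101
            compress for query low
            update indexes;

    alter table txn move partition p_20220601
            compress for archive high
            update indexes;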
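
For (5), a sketch of the monitoring query. The view is populated lazily from in-memory data, so flush the monitoring info first (owner and table name are hypothetical):

    execute dbms_stats.flush_database_monitoring_info

    select  partition_name, inserts, updates, deletes, timestamp
    from    dba_tab_modifications
    where   table_owner = 'APP'
    and     table_name  = 'TXN'
    and     partition_name is not null
    order by
            partition_name;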

Regards
Jonathan Lewis

On Wed, 4 Jan 2023 at 07:01, yudhi s <learnerdatabase99_at_gmail.com> wrote:

> Hello Experts, Happy New Year to all.
>
> Consider an Oracle database version 19c (19.15 to be specific) on an
> Exadata X8 machine. The majority of the transaction tables in the
> database are daily range partitioned on a business date column holding
> truncated date values (without any time component). These tables hold
> ~1 year+ worth of data, i.e. ~365+ partitions.
>
> DML/writes happen only on the live partition, i.e. today's data (say
> the Nth partition).
>
> Querying/reading happens mainly on the last ~5 days of data, i.e. the
> latest N-5 partitions. However, the month-end process (which runs once
> a month) reads/scans a month's worth of data, i.e. all the partitions
> newer than sysdate-30, or the latest N-30 partitions. The rest of the
> data/partitions are read/written on an ad hoc basis with minimal
> frequency.
>
> Now we are seeing a space crunch on the box, and also, considering
> future growth, we have been asked to compress all the possible
> table/index partitions. My question is: in the above scenario, would
> it be a good approach
>
> 1) to just HCC compress all the historical table partitions, i.e. all
> the partitions with partition position < N-30, using the 'compress for
> query high' option, and
> 2) to compress all the < N-30 local index partitions with the
> 'compress advanced low' option, as ADVANCED index compression will let
> Oracle choose the best index prefix key at the local partition level
> and leaf block level automatically, rather than us having to worry
> about that?
>
> Or would any other strategy be best in the above situation for these
> transaction tables?
>
> As I understand it, writing will not be a concern as we won't be
> touching the live partitions; however, where reading (i.e. index
> scans, cell smart scans etc.) happens on the compressed table/index
> data, it could change the current performance figures. Is my
> understanding correct here?
>
> Regards
> Yudhi
>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jan 10 2023 - 11:31:38 CET
