System statistics

From: Tony Hasler <tony@aberdour.demon.co.uk>
Date: Sat, 17 Aug 2013 17:31:27 +0100
Message-ID: <002301ce9b67$39225090$ab66f1b0$@aberdour.demon.co.uk>



I am trying to solicit some opinions on the apparently divisive topic of system statistics.

There seem to be two areas of dispute. The first, mildly contentious, area is whether gathering system statistics, workload or no-workload, is likely to generate reasonable results. Let us not discuss that: if the figures aren't right, you can set them. For me, the more interesting question is: why bother at all?
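For anyone who wants the concrete commands, the figures live in DBMS_STATS. A minimal sketch of both approaches is below; the particular statistic names and values are illustrative only, not recommendations:

  -- Gather no-workload system statistics
  EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');

  -- ...or, if the gathered figures aren't right, set the ones you believe in
  -- (values here are purely illustrative)
  EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC',     64);
  EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM',  5);
  EXEC DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10);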

We know from experience that when the CBO makes a cardinality error of a factor of 2 or 3, no great harm is likely to befall us; normally such errors either have no effect on the selected plan, or the change in plan brought on by the error isn't particularly disastrous. Disastrous execution plans are typically brought about only by cardinality errors of a factor of 10 or, more usually, much more.

Given our tolerance for cardinality errors, why are we so concerned about accurately quantifying the size and performance of multi-block reads? Why not just delete system statistics, set DB_FILE_MULTIBLOCK_READ_COUNT to 0 (implying the maximum possible I/O size), and be done with it? The CBO might be out by a factor of 2 or 3 in costing a full table scan, but cardinality errors are likely to have a similar effect on the costing of indexed access.
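In concrete terms, the "be done with it" approach amounts to something like the sketch below. (Hedged: the effect of an explicit 0 may vary by version; resetting the parameter instead is the usual way to fall back to the auto-tuned value.)

  -- Throw away system statistics so the CBO reverts to its built-in defaults
  EXEC DBMS_STATS.DELETE_SYSTEM_STATS;

  -- Let the multi-block read size default to the largest I/O the platform allows,
  -- either by setting it to 0 as suggested above or by resetting it altogether
  ALTER SYSTEM SET db_file_multiblock_read_count = 0;
  -- ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';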

--Tony Hasler

--

http://www.freelists.org/webpage/oracle-l