Re: How many is too many

From: Kevin Closson <ora_kclosson_at_yahoo.com>
Date: Fri, 12 Aug 2011 10:51:32 -0700 (PDT)
Message-ID: <1313171492.70468.YahooMailNeo_at_web161713.mail.bf1.yahoo.com>


Yes, no doubt. If you have to dart off for a row, it blows.



________________________________
From: Greg Rahn <greg_at_structureddata.org>
To: Kevin Closson <ora_kclosson_at_yahoo.com>
Cc: "oracle-l_at_freelists.org" <oracle-l@freelists.org>
Sent: Friday, August 12, 2011 10:29 AM
Subject: Re: How many is too many


Using PX with an index FFS could be a viable solution for a "skinny" table scan.  The big problem that I've seen is when it's not a multi-block read FFS but rather index -> table access by rowid, which results in single-block reads, and that is very inefficient for a large number of rows/amount of data compared to multi-block direct path reads from disk.
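
A minimal sketch of the two access paths, using a hypothetical SALES table and index names (illustrative only, not from the thread):

    -- Good for big row counts: parallel index fast full scan,
    -- multi-block direct path reads of just the index segment
    SELECT /*+ parallel(s 8) index_ffs(s sales_skinny_ix) */
           cust_id, SUM(amount)
      FROM sales s
     GROUP BY cust_id;

    -- Painful for big row counts: index range scan followed by
    -- TABLE ACCESS BY INDEX ROWID, i.e. single-block reads of the table
    SELECT /*+ index(s sales_cust_ix) */ *
      FROM sales s
     WHERE cust_id BETWEEN 100 AND 90000;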

Sure - I'm still buying beers. 


On Fri, Aug 12, 2011 at 9:48 AM, Kevin Closson <ora_kclosson_at_yahoo.com> wrote:

What about parallel execution + index scan? If a query only needs 12 of 100 columns, then it's sort of columnar in that case.
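
One sketch of how that idea plays out, with hypothetical names -- a wide fact table and a "skinny" index carrying only the columns the query touches:

    -- Index over just the columns this query needs
    CREATE INDEX big_fact_report_ix
        ON big_fact (order_dt, cust_id, amount, qty);

    -- A parallel fast full scan of that index reads only a fraction of the
    -- row width, behaving a bit like a column projection
    SELECT /*+ parallel(f 8) index_ffs(f big_fact_report_ix) */
           cust_id, SUM(amount), SUM(qty)
      FROM big_fact f
     GROUP BY cust_id;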
>
>
>
>I do, of course, agree on the damage to load times, but it is conceivable that a drop, load, index approach could work, particularly when one knows partitioning.
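
For a partitioned table, a rough sketch of that drop/load/index cycle done per partition (all object names here are hypothetical):

    -- "Drop" (mark unusable) the local index partition being loaded
    ALTER INDEX sales_ix1 MODIFY PARTITION p_2011_08 UNUSABLE;

    -- Direct-path load just that partition
    INSERT /*+ append */ INTO sales PARTITION (p_2011_08)
    SELECT * FROM sales_stage_aug;
    COMMIT;

    -- Rebuild the index partition afterwards, in parallel
    ALTER INDEX sales_ix1 REBUILD PARTITION p_2011_08 PARALLEL 8;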
>
>
>
>We shouldn't forget that Oracle had the DW market share before Exadata. Someone, somewhere worked out how to get indexes to work for DW. I agree that most people get it wrong, though, and that working out the I/O is a smart thing to do. Most of us still Party Like It's 1999, thinking that high-performance, high-bandwidth I/O is some holy grail never to be achieved.
>
>
>
>A single 3.5" 15K RPM SAS/FC drive can produce 200+ MB/s streaming. Start there, add drives, and scale the plumbing. It's not rocket science.
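
To put illustrative numbers on that (the drive count and table size below are made up for the example): 16 such drives striped together give roughly 16 x 200 MB/s = 3.2 GB/s of scan bandwidth, provided the HBAs and switch ports are sized to carry it, and a 1 TB full scan at that rate takes on the order of 1,000,000 MB / 3,200 MB/s, i.e. a bit over 5 minutes.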
>
>
>P.S. Are you still buying the beer next time, Greg? :-)
>
>
>
>
>________________________________
>From: Greg Rahn <greg_at_structureddata.org>
>
>To: oracle-l_at_freelists.org
>Sent: Thursday, August 11, 2011 10:47 PM
>Subject: Re: How many is too many
>
>
>
>For OLTP, adding indexes is a known quantity (in general). There is a known and finite number of queries, so adding indexes is about limiting the number of blocks visited.
>
>
>For DW, adding indexes in this manner (12 indexes on a table, etc.) is the beginning of the end.  Data loads suffer, and since query workloads are frequently unknown (ad hoc), DBAs often misapply OLTP techniques, adding indexes for each problem query they see.  Seems people forget why databases have indexes -- as a means to efficiently access a small number of rows.  When indexes are used as a means to reduce I/O, they are being misapplied, because the platform is not appropriately sized and/or the correct DW technologies (e.g., parallel execution + table scans) are not being applied to what is a large-number-of-rows problem.
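
A rough illustration of that contrast (hypothetical schema; the point is the access path, not the exact SQL):

    -- OLTP-style: an index is the right tool to fetch a handful of rows
    SELECT * FROM orders WHERE order_id = 123456;

    -- DW-style: for a question that touches most of the table, let PX scan it
    SELECT /*+ parallel(o 16) full(o) */
           o.status, COUNT(*), SUM(o.amount)
      FROM orders o
     GROUP BY o.status;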
>
>
>In any case, one has to start asking what good reasoning there is for 12 indexes.  That just shouts bad design and/or inappropriate tuning methodologies. Start asking why 5 times.
>
>
>On Thu, Aug 11, 2011 at 8:14 PM, Ram Raman <veeeraman_at_gmail.com> wrote:
>
> 
>>Rule of thumb! Thanks for the response, Greg. Does this apply to data warehouse or OLTP applications, or both?
>> 
>>On Thu, Aug 11, 2011 at 7:03 PM, Greg Rahn <greg_at_structureddata.org> wrote:
>>
>>>Sounds like the "index death spiral" to me.
>>>My simple sanity check: when the total number of columns indexed is greater than the number of columns in the table, there is likely a design or tuning problem. Oftentimes, even before then...
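
That sanity check is easy to script against the data dictionary; a rough sketch, simply comparing column counts per table:

    -- Tables where the total number of indexed columns (across all indexes)
    -- exceeds the number of columns in the table itself
    SELECT t.owner, t.table_name, t.tab_cols, i.ind_cols
      FROM (SELECT owner, table_name, COUNT(*) AS tab_cols
              FROM dba_tab_columns
             GROUP BY owner, table_name) t
      JOIN (SELECT table_owner, table_name, COUNT(*) AS ind_cols
              FROM dba_ind_columns
             GROUP BY table_owner, table_name) i
        ON i.table_owner = t.owner
       AND i.table_name  = t.table_name
     WHERE i.ind_cols > t.tab_cols
     ORDER BY i.ind_cols - t.tab_cols DESC;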
> 
>-- Regards,
>Greg Rahn
>http://structureddata.org
>
>
>


-- 
Regards,
Greg Rahn
http://structureddata.org
