Re: Demo of a New Advanced Utility to Maximize Oracle SQL Execution Efficiency

From: Mladen Gogala <>
Date: Wed, 5 Sep 2012 10:01:43 +0000 (UTC)
Message-ID: <>

On Tue, 04 Sep 2012 18:31:47 -0700, optiwarehouze wrote:

> Please view our youtube presentation that explains the architecture and
> revolutionary approach taken by this product with an actual
> demonstration against an Oracle Database.
> Please contact us at the OptiWarehouze Corporation through linkedin or
> by e-mail

First of all, this group is not intended for advertising. Please don't do it any more.

Second, I took the time to look at your demo and have several remarks and questions:

  1. You are modifying the SYS schema, which can render my database unsupportable. Do you have a special relationship with Oracle Corp.? If not, how can I be sure that Oracle Corp. will support my databases if I install your tool? If I install your tool and Oracle tells my boss that our production database is no longer supportable, I will be out of a job in a New York minute.
  2. You are using an undocumented feature which allows very large histograms. DBMS_STATS supports up to 254 endpoints, AFAIK. That feature will probably be documented in Oracle RDBMS 12c and usable by DBMS_STATS in that version. Will your software be compatible with 12c, and will the large histograms be available in 12c? If so, I would be wasting the company's money on something it will get for free in a few months. I'm not really sure that my CIO would like that very much.
  3. The Oracle optimizer is changing constantly. How would your product influence the optimizer when I apply quarterly PSU patchsets? What happens when the database is upgraded? The choice of an obsolete version for a demo is a bit strange, at least in my eyes. What would be the effects of the doctored stats on my upgraded database? Would Oracle Support help me with fixing any bugs which may come into being because of the new method for collecting stats?
  4. In your demonstration, you used an estimate percent of 100%. That is out of the question with some of my large tables. What is the quality of the logarithmic histograms if a small percentage is used, like 2%? On a 350GB table, even that is a lot. Does your package recognize the DBMS_STATS.AUTO_SAMPLE_SIZE value, which is a must if I want to use the new, faster algorithm for cardinality estimates?
  5. I have tidied up statistics collection in my databases. Stats run nightly, with database-level preferences set for all tables except 43 of them, which have their own table-level preferences. Can I use the GATHER_STATS_JOB auto task, or should I create a new job? Will your package read the table- and schema-level preferences set by DBMS_STATS?
  6. With rigorous design rules, peer review and DBA sign-off, I get very little "mad bomber" SQL. The statements I do get, usually generated by Hibernate, I can fix by hinting and by using baselines and DBMS_SPM. What would you suggest as a business case for your package in my environment?
  7. How do your stats play with cardinality feedback and dynamic sampling?
  8. What was the reason for the demo on that particular version? In my experience, things usually happen for a reason and I would like to know this one.
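For the record, the kind of preference setup I mean in points 4 and 5 looks roughly like this (the schema and table names are made up, and the 2% value is just an example of a table-level override):

```sql
-- Database-wide default: let Oracle choose the sample size,
-- which enables the newer, faster cardinality algorithm.
BEGIN
  DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT',
                              'DBMS_STATS.AUTO_SAMPLE_SIZE');
END;
/

-- Table-level override for one of the 43 exceptions
-- (owner and table name here are hypothetical).
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(ownname => 'APP',
                             tabname => 'BIG_FACT',
                             pname   => 'ESTIMATE_PERCENT',
                             pvalue  => '2');
END;
/
```

The nightly GATHER_STATS_JOB auto task honors these preferences, which is exactly why I want to know whether your package reads them too.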
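And this is the sort of fix I described in point 6 for a misbehaving Hibernate statement: capture its plan from the cursor cache into an SPM baseline (the sql_id below is a placeholder; the real one comes from V$SQL):

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  -- Load the accepted plan(s) for one statement into a baseline;
  -- the sql_id value here is hypothetical.
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '1a2b3c4d5e6f7g8');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
END;
/
```

Once the baseline exists, the optimizer keeps using the accepted plan regardless of what the generated SQL does to the stats.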
Mladen Gogala
Received on Wed Sep 05 2012 - 05:01:43 CDT
