Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Max Size Datafile in 10g

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: 15 Sep 2004 02:47:16 -0700
Message-ID: <73e20c6c.0409150147.33d5e11e@posting.google.com>


Mark Townsend <markbtownsend_at_comcast.net> wrote in message news:<6SP1d.51449$D%.11110_at_attbi_s51>...
>
> Nope - we will see single figure petabyte databases very soon (they are
> being built as we speak), and within 10 years we will see 100+ petabyte
> databases and some early exabyte environments. And it's not just
> marketing - Jim Gray has an interesting presentation on this - see
> http://www.research.microsoft.com/~Gray/talks/Gray%20IIST%20Personal%20Petabyte%20Enterprise%20Exabyte.ppt
>

Very interesting. Thanks for an excellent link. In another 10 years I reckon everything we do in terms of storage will be completely upside down. I had a similar conversation with Barry Mathews of Oracle Australia when he introduced the 10g stuff. We both saw it going the way of there being no need to worry about disk configuration or space usage within another two versions of Oracle.

As for access speed, partitioning will be the norm, and the partitioning algorithm will be defined in XML and be dynamic. Physically, there will probably be a layer of hash partitioning and that's it. On top of that, a layer of XML-to-hash indexing (for lack of a better term), and then just an XML dictionary to link them all (one ring to rule them all? <G>). There is no other way of making sense of so much data and indexing it in such a way that it remains usable.
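To make the layered idea a bit more concrete, here's a toy sketch of what "partitioning defined in XML, hash at the physical layer" could look like. Every name and the XML schema here are pure invention for illustration, not any actual Oracle feature:

```python
import hashlib
import xml.etree.ElementTree as ET

# Hypothetical XML "partitioning dictionary": the rule that maps a record
# key to a physical hash partition lives as data, so it can change
# dynamically without touching the storage layer underneath.
PARTITION_RULE_XML = """
<partitioning table="events">
    <key column="event_id"/>
    <hash partitions="8"/>
</partitioning>
"""

def load_rule(xml_text):
    """Parse the XML rule into (key_column, partition_count)."""
    root = ET.fromstring(xml_text)
    key = root.find("key").get("column")
    count = int(root.find("hash").get("partitions"))
    return key, count

def partition_for(record, key, count):
    """Physical layer: plain hash partitioning, nothing else."""
    digest = hashlib.md5(str(record[key]).encode()).hexdigest()
    return int(digest, 16) % count

key, count = load_rule(PARTITION_RULE_XML)
record = {"event_id": 12345, "payload": "promo video"}
print(partition_for(record, key, count))  # a partition number in 0..7
```

The point of the sketch: the only thing fixed in the physical layer is the hash; everything else (which column, how many partitions) is described in the XML dictionary and can be rewritten on the fly.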

A parallel can be drawn with traditional libraries. Most store far more books than anyone can read in a lifetime (think of it as access bandwidth): the speed of reading is the limiting factor. Unless you have a librarian who knows what he's doing and can catalog and index all that jazz into something that sends you directly to the info you wanted, or at least narrows it down to just what you want.

The same happens with all this exabyte stuff. There is no way, with current access bandwidth, that anyone can search it intelligently. Even the indexing has to go in before the data does: you simply can't afford to build indexes after the fact. It would take far too many resources.
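The index-before-the-data point can be shown with a toy catalog: the inverted index is built as each item streams in, so a later search never has to scan the raw store. This is a generic illustration of the idea, not any particular product's mechanism:

```python
from collections import defaultdict

class IngestIndexer:
    """Toy catalog that indexes every item at ingest time, so a search
    never needs an after-the-fact scan of the raw store."""

    def __init__(self):
        self.store = []                 # the raw data
        self.index = defaultdict(set)   # tag -> set of record ids

    def ingest(self, item, tags):
        rid = len(self.store)
        self.store.append(item)
        for tag in tags:                # indexing happens on the way in
            self.index[tag].add(rid)
        return rid

    def search(self, *tags):
        """Intersect the postings lists; never scans self.store."""
        if not tags:
            return []
        ids = set.intersection(*(self.index[t] for t in tags))
        return [self.store[i] for i in sorted(ids)]

cat = IngestIndexer()
cat.ingest("product launch video", {"video", "2003", "launch"})
cat.ingest("quarterly report", {"doc", "2004"})
print(cat.search("video", "launch"))  # ['product launch video']
```

At exabyte scale the raw-store scan that this avoids is exactly the part you can never afford, which is why the tagging has to happen at ingest.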

That opens up a tremendous new job category: the data librarian. A person who can define, a priori, a catalog structure that makes your data usable. And who can make it happen when you want that video clip from last year's event that you thought would be good for promoting your product this year. And so on.

Ain't storage technology just riveting?

Received on Wed Sep 15 2004 - 04:47:16 CDT

