Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> comp.databases.theory -> Re: Extending my question. Was: The relational model and relational algebra - why did SQL become the industry standard?

Re: Extending my question. Was: The relational model and relational algebra - why did SQL become the industry standard?

From: Paul <pbrazier_at_cosmos-uk.co.uk>
Date: 28 Feb 2003 09:33:48 -0800
Message-ID: <51d64140.0302280933.4ba19da0@posting.google.com>


"Paul Vernon" <paul.vernon_at_ukk.ibmm.comm> wrote in message news:<b3j115$kdq$1_at_sp15at20.hursley.ibm.com>...
> > Maybe we need an implementation of the integer domain in DBMSs that
> > will cover ALL integers i.e. given enough machine space any integer
> > can be stored. So this would deal with the countably infinite in some
> > logical sense, but we'd still have problems with the polar-cartesian
> > mapping (which requires uncountable infinities).
>
> I'm with Jan. Our database logical model needs to be discrete and finite. I
> think all our (non abstract) types need to be finite and our model just need
> to get over it and deal with it.

Maybe a better word would be "unbounded" rather than "infinite".

Although any relvar can only have a finite number of tuples at any given time, there is no theoretical upper limit; the number of tuples is unbounded.

So by analogy: if it's acceptable for relvars to hold an unbounded number of tuples, why shouldn't domains be allowed to have an unbounded number of values? Although we can only ever work with finite subsets of the domain, there are infinitely many such subsets to choose from. The domain as a whole is never materialised anyway, so we never actually have to deal with the infinite.
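As an aside, "unbounded" integer types already exist outside the relational world. Python's built-in int, for instance, is arbitrary-precision: any integer can be stored exactly, limited only by available memory, which is exactly the "given enough machine space" behaviour described above. A minimal sketch:

```python
# Python's built-in int is arbitrary-precision: it is bounded only by
# available memory, not by a fixed machine word size.
a = 2 ** 200          # far beyond any 64-bit machine integer
b = a + 1

print(len(str(a)))    # 61 -- a 61-digit number, stored exactly
print(b - a)          # exact arithmetic: prints 1
print(b > a)          # comparisons still work: prints True
```

So the unbounded *domain* is perfectly implementable; only the values actually stored are ever finite.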

> For queries, I could comprehend delaying the specification of the size of a
> type until the query compile time, at which point the DBMS decides upon a
> maximum amount of storage that the query can consume, and from that calc (plus
> say requested response time) fixes the size of any types in the query as
> needed. In other words I could accept a type whose upper bound was 'storage'
> just as long as each time that type was used, a particular value for that
> bound was fixed. What I do not accept in our database logical model is any
> concrete type whose size is 'infinite'. Such a type is just going to
> RAISE_ERROR (of some kind) when any use of it exceeds the storage capacity of
> the machine anyway, so there is no logical difference between an 'infinite'
> type and a 'storage' type fixed at query time.

I realise that physical implementation considerations come into play. Maybe you could hint to the DBMS that your attribute will almost certainly stay under a given value; for values within the hint, the DBMS could then use efficient but less flexible storage and computation methods. If a value did happen to exceed the hinted limit, the DBMS could still struggle on (perhaps warning the user that performance is affected). That way, errors are only generated in response to the physical limitations of the machine, never by the logical model.
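The hinting scheme above can be sketched in a few lines of Python. Everything here is hypothetical (the class name, the choice of a 32-bit hint); it is only meant to show the fall-back behaviour: the common case uses compact fixed-width storage, and an over-the-hint value triggers a warning and slower storage rather than an error.

```python
import struct
import warnings

class HintedIntColumn:
    """Illustrative sketch, not a real DBMS feature: the user hints that
    values will almost certainly fit in 32 bits, so the common case uses
    packed fixed-width storage.  A value exceeding the hint falls back to
    flexible arbitrary-precision storage with a warning -- no error,
    matching the idea that only the machine's physical limits should
    ever raise one."""

    INT32_MIN = -2 ** 31
    INT32_MAX = 2 ** 31 - 1

    def __init__(self):
        self._fixed = bytearray()   # packed 4-byte slots (common case)
        self._overflow = {}         # row index -> arbitrary-precision int

    def append(self, value):
        row = len(self._fixed) // 4
        if self.INT32_MIN <= value <= self.INT32_MAX:
            self._fixed += struct.pack("<i", value)
        else:
            warnings.warn("value exceeds 32-bit hint; using slower storage")
            self._fixed += struct.pack("<i", 0)   # placeholder slot
            self._overflow[row] = value

    def get(self, row):
        if row in self._overflow:
            return self._overflow[row]
        return struct.unpack_from("<i", self._fixed, row * 4)[0]
```

For example, appending 42 lands in the packed array, while appending 10**30 warns and goes to the overflow map; both read back exactly via get().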

Paul. Received on Fri Feb 28 2003 - 11:33:48 CST

