Re: A new proof of the superiority of set oriented approaches: numerical/time series linear interpolation

From: Cimode <cimode_at_hotmail.com>
Date: 5 May 2007 11:21:35 -0700
Message-ID: <1178389295.897789.225590_at_l77g2000hsb.googlegroups.com>


On 5 May, 19:30, "David Cressey" <cresse..._at_verizon.net> wrote:
> "Cimode" <cim..._at_hotmail.com> wrote in message
> > Thank you for your friendly warning, but do not worry about me. To
> > me, it's a matter of expectations and feedback quality. I have
> > already come to the conclusion that a few good points made by some
> > people here may be worth the hassle of accepting noise, off-topic or
> > uninformed (unaware?) feedback.
>
> > In my quest of refining and characterizing a computing model that
> > would correctly represent relations and relation operations, such
> > quality input is quite refreshing and allows me to double check that I
> > am not following a dead end conceptual lead.
>
> Fine. Back to the main point.
Thank you very much for understanding.

> First, inferences and data are not the same thing. An interpolation, or a
> point derived from regression, or whatever, is an inference, not data.
Absolutely. I should have been more careful in my phrasing.

> In your original exposition of the thread, you presented it as if the
> inferences could be returned to the user in place of data, where no actual
> data is available. My claim is that you do this at your peril. It has to
> be possible for the user to find out whether the supposed data provided by
> the DBMS is data that was provided to it, or whether it's an inference
> based on available data and a model for how the data works.
I agree one hundred percent. From a purely logical standpoint the risk is indeed very high. The reason I flirt with such danger is to identify the key issues I must address in refining and building a computing model that supports a systematic treatment of missing information *without* projecting logical decomposition (as described by Darwen). I have reasons to believe that projecting that method onto the physical layer is a dead end, because it would break the primal principle of separation between the two layers. So I am reviewing a few computing methods that would allow the DBMS to systematize that treatment. Among these methods, interpolation may be applied in the specific context of numerical and datetime implementations of mathematical series. My hope is that if I can avoid enough traps, this may prove helpful in refining the computing model.
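To make that concrete, here is a minimal sketch (names and the pair-based representation are my own assumptions, not a proposed implementation) of what a DBMS-side systematic treatment might look like: the routine returns the value together with a flag distinguishing stored data from inference, so the user can always tell the two apart.

```python
from datetime import datetime

def interpolate_series(observations, ts):
    """Linearly interpolate a value at time ts from (time, value) pairs.

    Returns (value, inferred): inferred is False for a stored
    observation and True when the value is only an inference.
    """
    points = sorted(observations)
    for t, v in points:
        if t == ts:
            return v, False  # actual stored data
    # find the observations bracketing ts
    before = [(t, v) for t, v in points if t < ts]
    after = [(t, v) for t, v in points if t > ts]
    if not before or not after:
        raise ValueError("cannot interpolate outside the observed range")
    t0, v0 = before[-1]
    t1, v1 = after[0]
    frac = (ts - t0).total_seconds() / (t1 - t0).total_seconds()
    return v0 + frac * (v1 - v0), True  # an inference, not data

obs = [(datetime(2007, 5, 1), 10.0),
       (datetime(2007, 5, 3), 14.0)]
value, inferred = interpolate_series(obs, datetime(2007, 5, 2))
# value == 12.0, inferred == True
```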

> Bob Badour already gave a function (something like (x-1) / (x-1) ) whose
> value is predictable everywhere except where x = 1. Inference by
> interpolation is risky with this function. Allow me to add another example:
> suppose you have some (necessarily incomplete) data about the location of
> the planets as observed at known places and times (like Tycho Brahe's
> data). Suppose you have a model of how the planets move (like Ptolemy's or
> Copernicus' or Kepler's, or Newton's or Einstein's) and suppose to
> interpolate (or extrapolate) to provide answers at other points in time. I
> believe this is precisely what the software that comes with some home
> telescopes does.

I do not believe classical functions are an appropriate abstract mathematical tool for treating the problems linked with interpolation of numerical series. I have found mathematical sequences and series a much more comfortable tool to work with.

http://fr.wikipedia.org/wiki/Suite_(math%C3%A9matiques_%C3%A9l%C3%A9mentaires) http://en.wikipedia.org/wiki/Sequence
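The function you quote from Bob Badour makes the danger easy to demonstrate: sampling it as a sequence and interpolating yields a confident answer at the one point where no value exists. A small sketch in Python, for concreteness:

```python
def f(x):
    # Bob Badour's example: equals 1 everywhere, undefined at x = 1
    return (x - 1) / (x - 1)

# sample the "series" at points around the singularity
xs = [0.0, 0.5, 1.5, 2.0]
samples = [(x, f(x)) for x in xs]

# linear interpolation at x = 1 from its neighbours 0.5 and 1.5
(x0, y0), (x1, y1) = samples[1], samples[2]
y_interp = y0 + (1.0 - x0) / (x1 - x0) * (y1 - y0)
print(y_interp)  # 1.0 -- yet f(1) itself is undefined (0/0)
```

The interpolation silently manufactures a value at exactly the point where the underlying function has none, which is why the inference must never be presented as data.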

[Snipped]
> Aside: I wish my French were half as good as your English. I remember a
> maxim from a high school French book, that I'll try to reproduce from
> memory, without too many spelling errors:
I must confess: I cheated. I have spent 6 years in the US. ;)

> "Les conseils de la vieillesse sont comme le soleil d'hiver. Ils éclairent
> sans échauffer."
>
> For some reason, I think of this maxim often when I read this newsgroup.
Received on Sat May 05 2007 - 20:21:35 CEST
