Re: deductive databases
Date: Sun, 15 May 2005 22:23:22 -0400
Message-Id: <6tcll2-l9l.ln1_at_pluto.downsfam.net>
mountain man wrote:
> "Kenneth Downs" <knode.wants.this_at_see.sigblock> wrote in message
> news:qbiil2-164.ln1_at_pluto.downsfam.net...
>> mountain man wrote:
>>
>>> "Kenneth Downs" <knode.wants.this_at_see.sigblock> wrote in message
>>> news:tuuel2-uph.ln1_at_pluto.downsfam.net...
>>>> mountain man wrote:
>>>>
>>>>> "Alfredo Novoa" <alfredo_novoa_at_hotmail.com> wrote in message
>>>>> news:87t881l00onh3tibjbqvokll34kn2d67s9_at_4ax.com...
>>>>>
>>>>>> To say that recursion is not useful to solve part explosion problems
>>>>>> shows profound ignorance.
>>>>>
>>>>>
>>>>> 1) WTF is this critical inventory explosion problem?
>>>>
>>>> A widget is made of components. Each component is made of other
>>>> components, and so on and so on. The nesting level is not defined
>>>> ahead of time and can go to arbitrary depths.
>>>
>>> Thanks for the brief.
>>>
>>> Yes, arbitrary but not infinite depths, and once the maximum
>>> depth is chartered, the current entire instance of the problem
>>> can be flattened - no big deal. Auto-regen flat structure as
>>> required.
>>
>> Two objections from experience.
>>
>> First, if you set the depth at N, somebody will need N+1. They will
>> need it on Christmas Eve coinciding with your daughter's wedding.
>
>
> OK. Set the depth at N + X, where N is the maximum depth
> and X is your comfort buffer.
User will require N + X + 1, as you are getting on the plane for your long-awaited vacation.
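
To put some flesh on that, here is a minimal sketch of the depth-capped
approach (the table shape, part names, and DEPTH_CAP value are all my own
invention for illustration, not anything from a real system). The data only
has to run one level deeper than the cap for the whole explosion to fail:

# Hypothetical bill-of-materials: parent part -> list of (child part, qty).
# The nesting is deliberately one level deeper than the cap below.
BOM = {
    "widget":      [("assembly-1", 2)],
    "assembly-1":  [("assembly-2", 3)],
    "assembly-2":  [("assembly-3", 1)],
    "assembly-3":  [("screw", 4)],
}

DEPTH_CAP = 3  # the "N + X comfort buffer" being debated


def flatten(part, qty=1, depth=0, out=None):
    """Depth-capped explosion: refuses to descend past DEPTH_CAP."""
    if out is None:
        out = {}
    if depth >= DEPTH_CAP:
        # The failure mode under discussion: anything nested deeper
        # than the cap never makes it into the flat list.
        raise ValueError(f"max depth exceeded under {part!r}")
    for child, n in BOM.get(part, []):
        out[child] = out.get(child, 0) + qty * n
        flatten(child, qty * n, depth + 1, out)
    return out


print(flatten("widget"))  # raises ValueError: the data is N + X + 1 deep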
>
>
>> Second, it is dangerous to expect to pass complete run-outs when
>> information changes because you lose the ability to keep transactions
>> small. Multiply the run-outs by a high user count and you've got a
>> major bottleneck.
>
>
> I don't understand what you mean by run-outs.
Flattening.
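
Since the term caused confusion, here is a rough sketch (Python, all names
invented) of what regenerating run-outs on a change actually entails, and
why it makes for large transactions: every stored flat list that mentions
the changed part has to be rebuilt along with the change itself.

# Hypothetical BOM plus a table of pre-computed run-outs (flattened
# explosions) kept alongside it, as the flatten-in-advance approach requires.
BOM = {
    "widget": [("chassis", 1), ("board", 2)],
    "board":  [("chip", 4), ("resistor", 20)],
    "gadget": [("board", 1)],
}


def explode(part, qty=1):
    """Full explosion of one part into a flat {component: total qty} map."""
    flat = {}
    for child, n in BOM.get(part, []):
        flat[child] = flat.get(child, 0) + qty * n
        for grandchild, m in explode(child, qty * n).items():
            flat[grandchild] = flat.get(grandchild, 0) + m
    return flat


# The stored, denormalised run-outs.
RUNOUTS = {part: explode(part) for part in BOM}

# One low-level change arrives: the board gains a capacitor.
BOM["board"].append(("capacitor", 2))

# Every run-out that mentions "board" is now stale and must be rebuilt in
# the same transaction as the change, which is the bottleneck objection.
stale = [p for p, flat in RUNOUTS.items() if p == "board" or "board" in flat]
for p in stale:
    RUNOUTS[p] = explode(p)
print(stale)  # ['widget', 'board', 'gadget'] all had to be regenerated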
>
>
>>> Throw the max depth code into an automated exception alert
>>> to present on a queue to some workgroup that an instance has
>>> arisen where widget_id 123456789 has exceeded max depth.
>>
>> See first objection above.
>
>
> Throw max depth + N into the alert queue.
> (Where N reflects your index of comfort).
This is an after-the-fact fix, less robust than a system that allows infinite nesting in theory and so is bounded only by physical resources.
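
For contrast, a minimal sketch of the unlimited-depth alternative, using a
recursive query so that no N or N + X appears anywhere. This assumes an
SQLite build new enough to support WITH RECURSIVE, and the table layout and
part names are again invented for the example, not anyone's actual schema.

import sqlite3

# Hypothetical adjacency-list BOM: one row per (parent, child, qty) edge.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bom (parent TEXT, child TEXT, qty INTEGER);
    INSERT INTO bom VALUES
        ('widget', 'assembly', 2),
        ('assembly', 'sub-assembly', 3),
        ('sub-assembly', 'screw', 4);
""")

# The recursive CTE explodes to whatever depth the data happens to have.
rows = con.execute("""
    WITH RECURSIVE explosion(part, qty) AS (
        SELECT child, qty FROM bom WHERE parent = :root
        UNION ALL
        SELECT b.child, e.qty * b.qty
        FROM bom AS b JOIN explosion AS e ON b.parent = e.part
    )
    SELECT part, SUM(qty) FROM explosion GROUP BY part ORDER BY part
""", {"root": "widget"}).fetchall()

print(rows)  # [('assembly', 2), ('screw', 24), ('sub-assembly', 6)]

The only limit here is what the engine and the hardware can chew through,
which is the "bounded only by physical resources" property I am arguing for.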
>
>
>> Third objection is the dumb retail terminal. Counter help says, "I can't
>> sell you that radio you are holding in your hand because the computer
>> says we don't have any." The limit you set and the exceeding depth will
>> be seen by the users as arbitrary and onerous.
>
>
> If the N+X limit was somehow exceeded, the item would be
> automatically sitting on a centralised data integrity exception
> queue.
Tell that to the customer at the counter who wants to buy the radio: "As soon as we clear the data integrity exception queue we'll have you on your way --- hey? Where'd he go? Sir? Do you want the radio?"
>
> It is probably important to mention here that IMO there are
> thousands of possible data integrity exceptions that will enter
> any production database system, no matter how well the data
> structures and constraints have been defined. Therefore an
> automated data integrity exception series is often the first thing
> constructed.
>
Ouch. Include me out.
>
>
>>>> Building a flat list of a complete parts explosion for an item
>>>> therefore is a hassle.
>>>
>>> If you don't know the depth it is, but you build
>>> a tool to determine the maximum depth first.
>>>
>>>
>>> In fact there are literally hundreds of alternative work-arounds
>>> to this type of problem without involving any form of esoteric
>>> generalised recursion theory.
>>>
>>
>> Agreed, you don't need anything esoteric, the solutions are well-known,
>> it's just they all contain the moral equivalent of a divide by zero.
>
>
> That's what the theoreticians say,
> but I don't believe them. ;-)
Which theoretician says this? This is worth a subthread.
>
>
>>>>> 2) How many organisations are experiencing this problem?
>>>>
>>>> All manufacturers in the world?
>>>
>>>
>>> Not the ones I've ever had anything to do with.
>>>
>>> My main clients have been patent and trade mark attorneys
>>> and their IP management systems invariably generate these
>>> types of problems where relationships involve trees and
>>> their branches. EG:
>>>
>>> Tracking the relationships between patent applications,
>>> especially divisional patents, that can have parents, which
>>> themselves are a divisional patent of another divisional, etc.
>>> sounds like the same form of problem.
>>
>> Is it possible they are not hitting the system that hard? Not a high
>> volume of transactions? Sounds like for lawyers it would be no problem
>> to explode a hierarchy on each change, since they just don't change
>> that often, no?
>
> Correct, the inventory example would have a higher volume of
> transactions than the attorney example, but the principle should
> be applicable to either.
>
> With very high volume of transactions the incremental "flattening"
> of new stock (since last "flattening") may need to be scheduled
> with a greater periodicity, or invoked per each new widget.
>
>
Methinks our different approaches are heavily influenced by our situations on the ground. As a parting argument, I would still want the more robust solution, because it will work under both light and heavy load, while one built only for the light load can't scale.
--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth_at_(Sec)ure(Dat)a(.com)