Re: Process Model
Date: Fri, 19 May 2006 13:39:04 GMT
Message-ID: <YTjbg.1$ei2.0_at_trndny02>
"Kenneth Downs" <knode.wants.this_at_see.sigblock> wrote in message
news:9en0k3-6af.ln1_at_pluto.downsfam.net...
> David Cressey wrote:
>
> >
> > "The underlying strategy of functional decomposition consists of
selecting
> > the processing steps and sub-steps anticipated for a new system.
Analysts
> > use previous experience from similar systems, combined at times with an
> > examination of required outputs. The focus is on what processing is
> > required for the new system. The analyst then specifieas the processing
> > and functional interfaces."
> >
> > As far as I can tell, functional decomposition results in a process
> > model.
> >
> > Every information system I've dealt with was, at some stage of its
> > development, described by a process model and a data model. Sometimes
> > these models were implicit and non-verbalized, but they were models
> > nonetheless.
>
> It seems to me the process model is not strictly necessary for the
> foundation of a system. When a programmer speaks of "process steps" they
> usually start talking about reading data, doing calculations, then writing
> data.
>
> IMHO, if you can define derivations as part of the table design, you
> eliminate the need for most processes. The only exceptions I've run
> across are certain processes like ERP allocation where each row write
> affects tables in such a way that a loop is apparently required. Also
> in some import/export routines where the desired target format makes it
> more economical to write up some code than to dream up a generalized
> solution.
>
> But in your classical processes like billing and cash posting, the ability
> to specify derived values can eliminate posting processes entirely.
>
> So, once the process is recast as table definitions, outside process
> code becomes more like UI code, something that can easily be cobbled
> together to trigger events in the database, but it's not a foundation
> design element anymore.
>
> I should also point out that I've considered making use of table
> specifications to generate server-side processes, so that automatic
> derivations can be delayed and run on-demand. It seemed like a good idea
> for completeness, but nobody has yet asked me to do it, so it just sits
> the shelf as an idea.
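
Just to make sure I follow what you mean by defining derivations as part
of the table design, here's a toy sketch in Python (every name in it is
invented for illustration; I'm not claiming this is how your product does
it): the table spec declares a formula for each derived column, and one
generic write routine applies the formulas in order, so there is no
separate posting program left to write.

# Toy sketch only: a table spec that carries its own derivations,
# so "posting" collapses into a single generic write routine.
ORDER_LINES_SPEC = {
    "columns": ["qty", "unit_price", "tax_rate"],
    "derived": [                      # (column, formula) in evaluation order
        ("extended", lambda r: r["qty"] * r["unit_price"]),
        ("tax",      lambda r: r["extended"] * r["tax_rate"]),
        ("total",    lambda r: r["extended"] + r["tax"]),
    ],
}

def write_row(spec, row):
    """Apply every declared derivation, then hand the row to storage."""
    for column, formula in spec["derived"]:
        row[column] = formula(row)
    return row                        # a real system would INSERT/UPDATE here

write_row(ORDER_LINES_SPEC, {"qty": 3, "unit_price": 10.0, "tax_rate": 0.05})
# -> row now carries extended=30.0, tax=1.5, total=31.5
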
Oddly enough, I ran into this concept about ten years before I was
exposed to the RDM. It was referred to as "table driven programming".
The tables were coded in assembler, or DATA statements, or whatever, and
were memory resident. But "table driven programming" was clearly an
antecedent to the ideas you have been developing, IMO.
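
To make the comparison concrete, the flavor, rendered in Python rather
than assembler (the names and transaction codes below are made up purely
for illustration), was roughly a memory-resident control table plus one
dumb driver routine that does whatever the table says:

def post_payment(amount): return ("payment posted", amount)
def post_refund(amount):  return ("refund posted", -amount)
def reject(amount):       return ("rejected", amount)

# The memory-resident control table: transaction code -> (handler, limit)
CONTROL_TABLE = {
    "PAY": (post_payment, 10000),
    "REF": (post_refund,   1000),
}

def drive(txn_code, amount):
    handler, limit = CONTROL_TABLE.get(txn_code, (reject, 0))
    if amount > limit:                # the edit lives in the table, not the code
        handler = reject
    return handler(amount)

drive("PAY", 250)    # -> ('payment posted', 250)
drive("ZZZ", 250)    # unknown code falls through to -> ('rejected', 250)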
