Re: some information about anchor modeling

From: vldm10 <vldm10_at_yahoo.com>
Date: Mon, 13 May 2013 04:58:56 -0700 (PDT)
Message-ID: <d5113fbe-99f1-46ee-b678-a252440c0565_at_googlegroups.com>


Hi Derek,

> -------------
>
> Plagiarism
>
> -------------
>
>
>
> Yes, I understand, from painful experience. So let me start out by saying I am generally on your side, I agree and empathise.
>
>
>
> But I think you need to understand that although there are laws against it, etc, it is sadly very common in the west. Especially in the last ten years, where universities are no longer centres of learning; they are centres of programming humans to be herd animals, and to compete without resolution. I am not saying "deal with it", I am saying, protect yourself.
> -----------

I appreciate your comments. On this occasion I would like to quote the following: “During times of universal deceit, telling the truth becomes a revolutionary act.” --George Orwell

I think that behind each act there is a concrete person, with a name. Therefore, in this thread I concentrate on facts and names.

1. My paper completely solves an area of database theory which I call the "history of events". This area is in fact a general approach to the theory and practice of databases. I have called this general approach the general theory of databases, abbreviated "General databases". In my post of February 13, 2013, I wrote about the areas with which General databases deal.

2.
I had published my ideas on my website http://www.dbdesign10.com and in this user group in September 2005. Here is a link to the comp.databases.theory thread where I presented my ideas in 2005: http://groups.google.com/group/comp.databases.theory/browse_frm/thread/c79f846beb00cc56# (there are also many other links where these ideas were clearly presented).

Anchor Modeling was published in November 2009, and a fixed version was published in October 2010. Anyone can verify these facts on the internet. The internet is a global auditorium; in other words, I have global witnesses.

3.
On April 12, 2013, I wrote in this thread about some general conditions which must be satisfied. They concern identification, the simple key, and an algorithm for storing and recalling different kinds of objects into and from memory. The following ProcedureA gives these general conditions:



ProcedureA
(a)
The main part of solving “historical” and General databases consists of two sub-steps: 1. constructing an identifier of an entity or relationship; 2. connecting all changes of state of one entity (or relationship) to the identifier of that entity (or relationship).

I had published this idea on my website http://www.dbdesign10.com and in this user group in 2005 (see points 1 and 2 above).

(b)
“We determine the Conceptual Model so that every entity and every relationship has only one attribute, all of whose values are distinct. So this attribute doesn’t have two of the same values. We will call this attribute the Identifier of the state of an entity or relationship. We will denote this attribute by the symbol Ack. All other attributes can have values which are the same for some different members of an entity set or a relationship set. Besides Ack, every entity has an attribute which is the Identifier of the entity or can provide identification of the entity. This Identifier has one value for all the states of one entity or relationship.” (See section 1.1)
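
To make (a) and (b) concrete, here is a minimal SQL sketch. The table and column names (CarState, Ack, CarId, Color, StartDate) are hypothetical and chosen only for this post; they are not the exact schema from my paper or from Anchor Modeling. The point is that Ack is a simple key identifying one state, while CarId is the identifier of the entity, and every change of state is connected to it.

    CREATE TABLE CarState (
      Ack        INTEGER PRIMARY KEY,  -- identifier of the state; all values distinct
      CarId      INTEGER NOT NULL,     -- identifier of the entity; same value for all states of one car
      Color      VARCHAR(30),          -- ordinary attribute; values may repeat
      StartDate  DATE                  -- when this state began
    );

    -- Sub-step 2 of (a): each change of state of car 100 is connected to its identifier.
    INSERT INTO CarState VALUES (1, 100, 'blue', DATE '2005-01-10');
    INSERT INTO CarState VALUES (2, 100, 'red',  DATE '2009-06-01');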



Anchor Modeling uses the scheme given in part (a); it uses both of the sub-steps: 1. constructing an identifier of an entity (this is about the construction of the simple key); 2. connecting all changes of one entity to the identifier of that entity.

This is plagiarism.

The identifier is a more general solution than the surrogate key. The idea of the identifier makes it possible to present the key as a simple key instead of a complex key.
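
For contrast, here is a hedged sketch (again with hypothetical names) of the usual alternative: without an identifier of the state, the key of the historical table has to be a complex (composite) key, for example the identifier of the entity together with a date.

    -- Without a state identifier, two columns are needed to identify one state:
    CREATE TABLE CarHistory (
      CarId      INTEGER NOT NULL,
      StartDate  DATE    NOT NULL,
      Color      VARCHAR(30),
      PRIMARY KEY (CarId, StartDate)   -- complex key
    );

With the identifier of the state (Ack in the sketch above), one column is enough.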

ProcedureA is also important for other fields, for example philosophy, logic and semantics. People have always held that a name denotes a certain entity, even though this entity has changed many times. But the following problem has always existed: how can an entity which has changed into another entity be, in fact, the same entity? This problem is solved in my paper. An anchor surrogate key, or a surrogate key without ProcedureA, does not help at all. In my paper I gave the corresponding procedures, constructions and semantics for solving this problem. Note that I also introduced concepts that relate these fields to these procedures.

ProcedureA caused a lot of problems. At the conceptual level, the decomposition of data structures had not been solved. How does one explain the binary concept, which consists of the identifier of the entity and one attribute? I introduced the intrinsic, extrinsic and universal attributes. This led to an extension of Leibniz's Law of identity. I am writing this because I want to show the complexity of the problems that ProcedureA solves.

ProcedureA has a general character. It solves the following:

(i) The surrogate key. (See the sentence “Besides Ack, every entity has an attribute which is the Identifier of the entity or can provide identification of the entity.” in ProcedureA, case (b).) To be precise, this is not 100% the same as a surrogate key. The surrogate key is a technical solution; it is an index, it is not a theory. The mentioned part of my definition of the key is more general than the surrogate key. What is the difference between this part of my definition and the surrogate key? My key is always "visible", and it always uses ProcedureA, which determines the general conditions.
Another important difference between my identifiers and surrogates is that my identifier always belongs to an object: it belongs to a real entity, or to an abstract object, or to both the real and the abstract object.

(ii) ProcedureA enables work with abstract objects. My main structure is the state of an entity or relationship. The key of this structure is the identifier of the state of the entity or relationship. ProcedureA is the basic procedure for the construction of History. As I have already said, the state of an entity is defined as the total knowledge about the entity (or relationship). Total knowledge means that knowledge from more than one person is also included. It is possible that one person says that the color of the car is blue while another person claims that the color of the car is dark-green. In this case, my solution can maintain History.
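
A hedged sketch of this last situation (hypothetical names, with a Source column added only for this example): both claims become states of the same car, each with its own state identifier, so neither piece of knowledge is lost.

    CREATE TABLE CarColorState (
      Ack     INTEGER PRIMARY KEY,   -- identifier of the state
      CarId   INTEGER NOT NULL,      -- identifier of the entity
      Color   VARCHAR(30),
      Source  VARCHAR(50)            -- who supplied this knowledge
    );

    INSERT INTO CarColorState VALUES (10, 100, 'blue',       'first person');
    INSERT INTO CarColorState VALUES (11, 100, 'dark-green', 'second person');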

The identifier of a state of an entity can identify the entity. A surrogate key can also identify an entity. However, the identifier of a state of an entity is not the key of the entity; an entity and the state of the entity are two very different things. I wrote in more detail about the algorithm related to memory management in my post from February 25, 2013, in this thread. My state structure has both identifiers: the identifier of the entity and the identifier of the state of the entity. The state is not a fully abstract object; it is the state of a real entity. So the state is a combination of the real object and the abstract object. This situation is resolved by using a "combination" of the appropriate identifiers. In fact, this "combination" enables the storing and recalling of complex objects into and from memory.

So this is the rule: the combination of identifiers solves the "Store" and "Recall" of complex objects into and from memory.
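
As a hedged illustration of this rule, using the hypothetical CarState table from the sketch above: the identifier of the entity "recalls" the whole history of the entity, while the identifier of the state "recalls" one particular state.

    -- Recall the whole history of car 100: use the identifier of the entity.
    SELECT Ack, Color, StartDate
    FROM   CarState
    WHERE  CarId = 100
    ORDER  BY StartDate;

    -- Recall one particular state: use the identifier of the state.
    SELECT Color, StartDate
    FROM   CarState
    WHERE  Ack = 2;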

I also wrote about the relationship between concepts and identification; see the semantic procedure (3.3.3) in my paper “Database design and data model founded on knowledge constructs”.

This semantic procedure shows that the so-called Russell's paradox is not correct (see my thread “Does the phrase “Russell’s paradox” should be replaced with another phrase?” posted on this user group). More importantly, it shows that identification is another mind - real world link and that attributes are identifiers. As I wrote above, the identifier in my model is an attribute (it belongs to a real or abstract object). So the identifier from my ProcedureA is part of a complex construction (procedure). It is not merely a matter of the system, because it also has a semantic nature (it is a mind - real world link). In contrast to my solution, a solution that uses the surrogate key has no semantic nature; even more, the surrogate key is invisible to users.

(iii) There are mathematical theories that partially deal with history. One example is Modal Logic, which deals with “possible worlds”. There is also Situation Theory, etc. However, these theories have a very general nature and use undefined terms such as “world”. I have already mentioned that Microsoft introduces history in its EDM model. Another theory, called Abstract State Machines, now also introduces History (see http://www.w3.org/TR/scxml/ and look for History). It is also clear that we now need an adequate mathematical theory for the "history of events".

I write about this here, so that one can understand the scale and importance of this plagiarism.

Note that in working with the History, the surrogate key is not important. ProcedureA is essential.

4. Anchor Modeling uses other essential elements from my work. If someone links these elements together, then he can get a strong theory of history. These are the following elements:

(i) Decomposition into atomic structures

The decomposition into atomic structures in Anchor Modeling was done at the conceptual level without a proof. They simply put some “atomic structures” into their paper. Anchor Modeling then transfers these “atomic structures” from the conceptual model into the RM without a proof or explanation. The same “technique” was applied in RM/T.

Then they proved that the corresponding relvars are in 6NF. The subtitle of their paper is as follows: “An Agile Modeling Technique using the Sixth Normal Form for Structurally and Temporally Evolving Data”. The sixth normal form is just a list of wishes about the decomposition into atomic structures. It does not provide a solution, procedure or algorithm that allows the decomposition of a relvar into 6NF. I explained that 6NF does not work properly with the following example: if we have a relvar with mutually independent attributes, then 6NF makes no sense.
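
For readers who have not followed the papers, here is a hedged SQL sketch (hypothetical names, not taken from either paper) of what a decomposition into binary, attribute-per-table structures looks like; how such a decomposition is justified and proven is exactly what is in dispute.

    -- A relvar with several attributes ...
    CREATE TABLE Car (
      CarId  INTEGER PRIMARY KEY,
      Color  VARCHAR(30),
      Owner  VARCHAR(50)
    );

    -- ... decomposed into binary ("atomic") structures, one table per attribute,
    -- keyed by the entity identifier plus a date so that history can be kept:
    CREATE TABLE Car_Color (
      CarId      INTEGER NOT NULL,
      Color      VARCHAR(30),
      StartDate  DATE NOT NULL,
      PRIMARY KEY (CarId, StartDate)
    );

    CREATE TABLE Car_Owner (
      CarId      INTEGER NOT NULL,
      Owner      VARCHAR(50),
      StartDate  DATE NOT NULL,
      PRIMARY KEY (CarId, StartDate)
    );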

I want to say clearly that a lot of people have spent their time trying to obtain the atomic structures. A lot of people have also tried to construct mappings between data models. In RM/T and AM this was done without proof and published as scientific papers. One important step in the construction of atomic structures and in the mapping between data models is History.

In the Anchor Modeling paper, in reference [19], all five authors claim that they use the surrogate key (“H (C, D, T) contains, as seen above an anchor surrogate key”, see page 2). What is more, they use the surrogate key in SQL!

By definition, the surrogate key is not visible and is never displayed. Everybody can see the definition of the surrogate key in RM/T, on Wikipedia, etc. In my opinion this is very serious. Springer and Data & Knowledge Engineering are well-known, refereed journals used by many scientists. These journals must not allow the use of erroneous constructions, especially since E. Codd used surrogates for the decomposition into binary structures, which is not proven in RM/T. Now we can ask the question: on what was Anchor Modeling founded? Obviously, Anchor Modeling is based on nothing.

Note that the authors of Anchor Modeling first used the term "surrogate key", then "identities", and finally "identifier", which are very different things. It is obvious that these fundamental concepts are not clear to the authors of Anchor Modeling or to the editors of the paper.

However if you think that I am wrong here then let me know.

This paper, in which all five authors claimed that the anchor's values are surrogate keys, disappeared from the list of references. I started to write about reference [19] (see my thread "The original version"). The authors of Anchor Modeling returned the paper to the list of references, but in another of their papers.

Please note that the decomposition into binary structures is proven in my papers.

(ii) In my model, I assumed that entities have different sets of data. These are: knowledge about the entity, knowledge about the attributes, and knowledge about the data. In my model it is possible to implement various additional kinds of knowledge. Anchor Modeling also uses different sets of data, but a fixed and limited set. Note that my data model is more general than Anchor Modeling and that it is not based on undefined terms such as "metadata". Note that Anchor Modeling did not present work with "metadata" at all. In fact, Anchor Modeling can handle only very simple "metadata". You can take my example 2.5 from my website http://www.dbdesign10.com and notice that there is no structure from Anchor Modeling which can solve History for this example.

(iii) The date when some information is created and the date when this information ceased to exist. Anchor Modeling also uses two dates. The authors of Anchor Modeling pay attention to “bitemporal” data. In fact, they did not notice that my model enables n-temporal data.

For example, for one attribute you can have four StartDates: StartDate1, when the attribute gets its value in the real world; StartDate2, when the value of the attribute was entered into the database; StartDate3, when the value of the attribute was entered into the warehouse; StartDate4, when the attribute was transferred to XML. Note that I do not use a data warehouse, because I have only the database, but Anchor Modeling uses a data warehouse; they have a section devoted to the data warehouse. My point is that, in theory, it should be n-temporal data instead of bitemporal.
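
A hedged sketch of what this looks like for a single attribute (hypothetical names; the four dates follow the list above):

    CREATE TABLE CarColor_nTemporal (
      Ack         INTEGER PRIMARY KEY,  -- identifier of the state
      CarId       INTEGER NOT NULL,     -- identifier of the entity
      Color       VARCHAR(30),
      StartDate1  DATE,  -- when the attribute got this value in the real world
      StartDate2  DATE,  -- when the value was entered into the database
      StartDate3  DATE,  -- when the value was entered into the warehouse
      StartDate4  DATE   -- when the value was transferred to XML
    );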

Note that in Anchor Modeling the temporal data and “metadata” are outside the scope of predicate logic, which is a kind of disaster for the RM. In contrast to Anchor Modeling, my model enables the formalization of tensed propositions.

Vladimir Odrljin
