XP and data modeling

From: Dawn M. Wolthuis <dwolt_at_tincat-group.com>
Date: Fri, 18 Jun 2004 19:11:01 -0500
Message-ID: <cb00at$n5b$1_at_news.netins.net>



The Extreme Programming (XP) methodology has a process for determining what data from the problem domain is modeled today and what is left for future phases of a project. The rule of thumb is that only the data required to meet the user requirements for this specific iteration (this set of user stories) is designed and implemented. If, during development, we are aware of a requirement that we expect will land in a future iteration and will require a change to the data model, we address it when the time comes rather than designing for it now.

This has the disadvantage of not thinking past our noses, potentially boxing ourselves in with a narrow initial implementation when we could have anticipated that our house would need a bathroom in the future. But it has the advantage of giving a rule of thumb, at least, for what is designed in and what is not at each stage of development. There are then best practices that help with refactoring and agile development, such as working with VIEWS (logical data structures), so that if the underlying base tables change, the refactoring is not too extreme.
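To make the VIEW tactic concrete, here is a minimal sketch using Python's sqlite3 module with a hypothetical customer/address schema (the table, column, and view names are mine, for illustration only). The base tables are refactored between iterations -- an attribute moves out into its own relation -- but the view is redefined to present the same shape, so application code that queries the view needs no change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Iteration 1: one base table, with a view the application queries.
cur.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        city TEXT
    );
    CREATE VIEW customer_v AS
        SELECT id, name, city FROM customer;
""")
cur.execute("INSERT INTO customer (name, city) VALUES ('Ada', 'Pella')")

# Iteration 2: a new user story requires multiple addresses per customer,
# so city moves to its own table. The base tables change, but the view is
# rebuilt to expose the same columns as before.
cur.executescript("""
    DROP VIEW customer_v;
    CREATE TABLE address (
        customer_id INTEGER REFERENCES customer(id),
        city        TEXT
    );
    INSERT INTO address (customer_id, city) SELECT id, city FROM customer;
    -- Rebuild customer without the city column (portable across versions).
    CREATE TABLE customer_new (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    INSERT INTO customer_new SELECT id, name FROM customer;
    DROP TABLE customer;
    ALTER TABLE customer_new RENAME TO customer;
    CREATE VIEW customer_v AS
        SELECT c.id, c.name, a.city
        FROM customer AS c
        LEFT JOIN address AS a ON a.customer_id = c.id;
""")

# The application's original query still works unchanged.
print(cur.execute("SELECT name, city FROM customer_v").fetchall())
```

The refactoring cost is confined to the view definition and a one-time data migration; queries written against the view in earlier iterations survive the change untouched.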

I'm attracted to the XP approach because it does provide some pretty clear guidelines at any given point in the project -- you design what you need to design to fit the requirements being implemented -- but it seems like an approach that has at least as much chance of adding to project costs as of reducing them.

Question List 1: For anyone who has used XP with data modeling, what have you encountered in refactoring the data model through the life of the project and in maintenance thereafter? Is your experience that this approach reduces the overall cost of software applications? What types of changes are most likely in subsequent iterations during the project -- additions, such as added attributes or relations, or changes that have more of an impact on the application written to date, such as moving attributes from one relation to another?

Question List 2: Are there other methodologies that make it (at least as) clear what is in and what is out of the data model at any point during the project or thereafter? Other than risk assessment, mitigation strategies, and other standard project fare, are there any industry best practices or good papers related to designing data for future possibilities? In other words, other than within XP, where is the question of how to decide what is in and what is out of the data model even addressed?

Thanks. --dawn

Received on Sat Jun 19 2004 - 02:11:01 CEST
