Many things in the current IT world are based on hard facts, solid experience and studied techniques. Unfortunately, this tends not to be the case when application developers make database decisions. This is not a criticism of application developers per se (their expertise lies in the application technology) but rather a problem with the development process, and with a misunderstanding of the role of the database as the underpinning of a data-oriented (as opposed to object-oriented) application architecture.

The general process of modern agile application development proceeds along fairly set lines of iterative, feature-based and, hopefully, test-driven development. This approach of getting something working, with a suite of tests around it to enable rapid refactoring and rapid development, runs counter to most Big Design Up Front (BDUF) methodologies, most notably the much-maligned waterfall model, and to the general approach taken to most database design methods. To be a truly agile database developer in this brave new development world implies a level of clairvoyance beyond most of us: it requires an understanding of future application direction and projected data growth beyond that which can be expected of application developers and their product managers. To ignore this difference between the requirements of the agile developer and those of the data modeller/DBA will invariably lead to scalability and performance issues as the project moves forward through its multiple iterations.
We are not saying here that the focus and philosophy of the agile development team and those of the database designer/administrator are incompatible, rather that there is a difference in the needs and aims of the two groups, and that this difference needs to be recognised as such. This does not always happen in the necessary drive to shift development paradigms, and the larger the project, the more apparent it becomes. Changing code is not the same as changing data. A database at the core of a complex multi-tier application will usually be supporting many different access paths: the OLTP requirements of a running, user-driven application; the reporting requirements of management; maintenance and a suite of custom administration interfaces; data feeds in and out; and the requirements for failover and disaster recovery. Refactoring, recoding and changing requirements here is not as simple as it is for individual parts of the overall codebase.
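As a minimal sketch of why changing data is not like changing code, consider renaming a column on a table that live applications are reading and writing. The table and column names below are hypothetical, and SQLite stands in for whatever database is actually deployed; the point is the multi-step, backward-compatible expand/contract dance that a simple code rename never needs:

```python
import sqlite3

# Hypothetical example: migrating `name` to `full_name` on a live table.
# Each step must leave both old and new application versions working,
# which is why the change spans iterations rather than one commit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Ann",), ("Bob",)])

# Step 1 (expand): add the new column alongside the old one.
conn.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

# Step 2 (backfill): copy existing data across; on a large production
# table this would be batched to avoid holding long locks.
conn.execute("UPDATE users SET full_name = name WHERE full_name IS NULL")

# Step 3 (contract): only once every deployed application version writes
# to full_name can the old column be dropped, often an iteration later.
# conn.execute("ALTER TABLE users DROP COLUMN name")

rows = conn.execute("SELECT full_name FROM users ORDER BY id").fetchall()
print(rows)  # [('Ann',), ('Bob',)]
```

The intermediate state, in which both columns coexist, is exactly the kind of coordination between development iterations and the DBA that a code-only refactoring process never has to think about.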
While there are database methodologies that support agile development, current processes generally treat database design, deployment and administration as worryingly separate concerns. Everything involved in getting a product from the back of a restaurant napkin to global server deployment needs to be incorporated into the whole iterative development process. We should not be in the position of handing the DBAs the code to deploy only when the iterative cycle ends; this is simply unacceptable from an architecture and process point of view. What we have here is a series of failures to use the best people for the job.
As mentioned above, application developers are not generally the best people to be structuring and modelling the data. Data structures, no matter how well abstracted, are fundamentally tied to the underlying technology. Sensible design and architectural decisions can only happen if the data specialists are incorporated into the agile development teams themselves. We can, and need to, have separate database administration teams, but these teams must be part of, or have significant input into, the development process. Otherwise all roads lead to project misery, to poor use of software and hardware, and to the continuation of the ignorance loop, as mistakes in data structure and design are rarely fed back to the original designers. Adding features and refactoring as we go works for code; it rarely works in practice for data. It is not so much the new features that are the problem; it is the short-termism which the process unconsciously encourages that lies at the heart of it. Large data volumes require a degree of forward planning that is often lacking from our short, Sprint-focused design decisions.
Data modelling disasters aside, most projects start off with a fairly well understood and structurally sensible data model (bear with me here) for the first few iterations of the system. The first set of requirements is generally supported happily by that model. The issues usually only really arise as the codebase and feature list grow, along with the datasets. Data structures that support smaller volumes of data do not necessarily scale linearly, and a lack of understanding of the changing nature of the system will cause problems for the future of the application, because information does not flow back from the DBAs about the current state of the system, or about the new demands being placed on it by the continuous feature development of the agile process. Changing requirements are part of the agile world, and part of the power of this type of development, and additional features pose less of a problem than feature modifications; but the focus on short iterative steps can easily lead to a loss of focus on the greater need: the ability of the database, and of the data model itself, to support the changing application.
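To make the non-linear scaling point concrete, here is a small sketch (again using SQLite purely for illustration; the table and index names are invented) of how a query that behaves itself at small volumes degrades unless the schema evolves with the data:

```python
import sqlite3

# Hypothetical events table: lookups by user_id work fine early on, but
# without an index the query is a full table scan, so its cost grows with
# the table rather than staying roughly constant.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)"
)

# Ask the planner how it would run the query before any index exists.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?", (42,)
).fetchone()[3]
print(plan_before)  # e.g. 'SCAN events' -- every row is visited

# The same query after adding an index becomes a search, not a scan.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?", (42,)
).fetchone()[3]
print(plan_after)  # e.g. 'SEARCH events USING INDEX idx_events_user ...'
```

Nothing in the application code changes between the two plans, which is precisely why this class of problem is invisible to feature-focused iterations and needs the DBA's view of the growing system fed back into them.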
In conclusion, for our agile data development processes to succeed, we need to use skills where they are best suited. Database abstraction layers need to be tested by the database specialists, not in isolation but within the application context itself, and any fundamental design decisions should be made by the people who understand the systems involved, at all levels.