Database Continuous Delivery Meets Your Application

Summary:

Continuous delivery meshes well with agile development: Both address the need to move faster and deal with ever-changing requirements, delivering the best possible quality, usually with limited resources. Agility is what is expected from technology companies and IT divisions. So, what does it take to have continuous delivery in your database?

Research by Perforce Software reveals interesting information about the popularity of continuous delivery (CD). An infographic based on the company's findings from a Q4 2013 survey shows that 65 percent of responding companies are in the process of adopting continuous delivery for their projects. Even more interesting, 46 percent of the organizations worry that their competitors are using CD to get ahead.

This is exactly why agile development was born: the need to move faster and deal with ever-changing requirements, delivering the best possible quality, usually with limited resources. Waiting six months for the next rollout or release is not acceptable in today's market. The waterfall methodology's big-release concept doesn't cut it anymore. Agility is what is expected from technology companies and IT divisions. So, what does it take to have CD?

The Two Basic Truths

Achieving CD relies on two simple truths. The first is automation: automate every step of building, testing, promoting, deploying, and documenting.

The second truth, which is a prerequisite for automation, is version control. All changes must be documented and stored in one place, available to the automation processes. The automation processes retrieve information only from this vault and no other location. Everything checked into the vault can be promoted, provided it has passed all the steps.
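To make the vault-only rule concrete, here is a minimal sketch in Python, assuming Git as the vault; the required step names and the promote function are hypothetical, not part of any particular tool.

import subprocess

# Hypothetical list of pipeline steps a version must pass before promotion.
REQUIRED_STEPS = {"build", "unit-tests", "integration-tests"}

def resolve_from_vault(tag: str) -> str:
    """Return the commit ID for a tag, failing if it is not in the repository."""
    result = subprocess.run(
        ["git", "rev-parse", "--verify", f"{tag}^{{commit}}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise ValueError(f"{tag} is not in the vault; local artifacts cannot be promoted")
    return result.stdout.strip()

def promote(tag: str, passed_steps: set) -> str:
    """Promote a checked-in version only if every required step has passed."""
    commit = resolve_from_vault(tag)
    missing = REQUIRED_STEPS - passed_steps
    if missing:
        raise RuntimeError(f"cannot promote {commit}: steps not passed: {missing}")
    return commit  # downstream automation deploys exactly this commit

The point of the design is that nothing built or edited outside the repository can ever reach the promotion path.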

With this in mind, we have to find a way to control versions of all the application components, which can be native code (C#, C++, Java), database code (schema structure, PL/SQL or T-SQL code, application parameters stored in lookup tables), or configuration files.
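Purely as an illustration, a release manifest could enumerate those components so the automation knows everything that makes up a version; every path and name below is hypothetical.

# Hypothetical release manifest: everything the automation must version together.
RELEASE_MANIFEST = {
    "native_code": ["src/app/OrderService.java", "src/app/PriceEngine.cs"],
    "database_code": [
        "db/schema/orders_table.sql",         # schema structure
        "db/procedures/calc_price.sql",       # PL/SQL or T-SQL logic
        "db/lookup_data/app_parameters.sql",  # parameters kept in lookup tables
    ],
    "configuration": ["conf/app.properties", "conf/logging.xml"],
}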

The Missing Piece in Version Control

There are several types of version control solutions. The most common is file based: all the artifacts are files, whether simple text, XML, or even images.

The second type of version control is application dependent, as with Informatica: all changes made within Informatica can be versioned within Informatica. Changes that are related but external, such as database procedures, are out of scope for Informatica's version control.

Including the database within the application version control is not new. Until recently, the available solution was to use file-based version control tools to manage scripts that alter the database objects. The main challenge with this method is that the application is tested against a real database environment while the changes being promoted are taken from the file-based version control repository, so the two can be out of sync.

[Figure: Database development and file-based version control processes]
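One way to catch that out-of-sync risk is to compare what the repository says an object should look like with what actually exists in the database the application was tested against. A minimal sketch, where fetch_live_ddl is a hypothetical stand-in for querying the database catalog:

import hashlib

def fetch_live_ddl(object_name: str) -> str:
    """Placeholder: in practice, query the database catalog for the object's DDL."""
    return "CREATE PROCEDURE calc_price AS BEGIN /* ... */ END"

def normalize(ddl: str) -> str:
    """Reduce formatting noise so only real differences are flagged."""
    return " ".join(ddl.lower().split())

def fingerprint(ddl: str) -> str:
    return hashlib.sha256(normalize(ddl).encode()).hexdigest()

# The script checked into version control versus the object the application
# was actually tested against; a mismatch means the two are out of sync.
repo_ddl = "CREATE PROCEDURE calc_price AS BEGIN /* new logic */ END"
live_ddl = fetch_live_ddl("calc_price")
if fingerprint(repo_ddl) != fingerprint(live_ddl):
    print("WARNING: tested database differs from the version control repository")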

Now there is a third type of version control that completes the first two: database-enforced change management. It focuses on version control for the database code. This solution enforces version control best practices on database objects and provides a single, enforced process for database development, just as exists for the native code.

[Figure: Single process for database development and version control]

With all code, configuration files, and applications stored in a single place, the automation processes can build and promote them between environments.

Automation for Different Processes

A simplified automation process based on the "build once, deploy many" approach looks like this: check in changes to version control --> build --> deploy to test --> run unit tests --> deploy to the next level --> run more tests --> deploy to user acceptance testing --> test --> deploy to production.

[Figure: Simplified continuous delivery process]

There is one “build” step and many “deploy and test” steps—hence the name “build once, deploy many.”
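In outline, that flow could be scripted as in this minimal sketch; every function body is a placeholder for whatever build, deployment, and test tooling the project actually uses.

# Minimal "build once, deploy many" sketch: one artifact, several promotions.
# Every function body here is a placeholder for the project's real tooling.

def build(commit: str) -> str:
    print(f"building artifact from {commit}")
    return f"app-{commit}.tar.gz"            # the single artifact, built once

def deploy(artifact: str, env: str) -> None:
    print(f"deploying {artifact} to {env}")

def test(env: str, suite: str) -> bool:
    print(f"running {suite} tests in {env}")
    return True                              # placeholder result

artifact = build("a1b2c3d")                  # build happens exactly once
for env, suite in [("test", "unit"), ("integration", "integration"),
                   ("uat", "acceptance"), ("production", "smoke")]:
    deploy(artifact, env)                    # the same artifact every time
    if not test(env, suite):
        raise SystemExit(f"{suite} tests failed in {env}; promotion stops")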

While this works for native code, because no one thinks of building binaries on a private PC and copying them to a server, it is not the case with database deployment. For a start, database deployment is not copy and replace; it is a transformation from the previous version to the new version that preserves the business data stored in the database. And many times a critical fix is made in the database out of process, which causes problems for the "build once, deploy many" approach.

"Build once, deploy many" failed for the database deployments

Database Automation: Build and Deploy on Demand

With build and deploy on demand, the database delta script that upgrades the database from the current version to the next is generated when needed. The script is saved before it is executed in production and reused, so the production run uses a script that has already been tested in the lower environments.

[Figure: Build and deploy on demand]
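A minimal sketch of this idea, with a hypothetical generate_delta function standing in for the real compare step: the delta is generated once per version pair and saved, and the production run reuses the exact script that was already tested.

# Sketch of "build and deploy on demand": the delta script is generated when
# needed, then saved so production executes the exact script already tested.

saved_deltas = {}  # (from_version, to_version) -> delta SQL

def generate_delta(from_version: str, to_version: str) -> str:
    """Placeholder for the real compare step against the version control repo."""
    return f"-- upgrade {from_version} -> {to_version}\nALTER TABLE ... ;"

def delta_for(from_version: str, to_version: str) -> str:
    key = (from_version, to_version)
    if key not in saved_deltas:
        saved_deltas[key] = generate_delta(*key)   # generated on demand, then saved
    return saved_deltas[key]                        # later runs reuse the tested script

test_script = delta_for("1.4", "1.5")   # generated for the test environment
prod_script = delta_for("1.4", "1.5")   # identical, already-tested script
assert test_script is prod_script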

Confidence with Automation

Without confidence in automation, no one will ever use it. With no notification of conflicts in the database code between environments, it is difficult for database administrators, developers, or anyone else to rely on a script generated by a simple compare-and-sync method.

The issue of parallel development, with development sandboxes and branches, was solved many years ago in file-based version control by notifications on check-in events. When there is a conflict, the developer receives an alert that the code being checked in was changed by someone else after it was checked out. The developer can merge the changes on the local PC and then check in the merged code.

Databases are a different technology because the code is not stored exclusively on the developer's PC. The code exists in every environment, and anyone can modify it there. The merge event happens, and is resolved, when generating the delta script using the baseline approach, the same approach modern file-based version control tools use.

[Figure: Baseline impact analysis]
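A minimal sketch of the baseline approach: compare three definitions of the same object, namely the baseline both sides started from (kept in version control), the source being promoted, and whatever is currently in the target environment.

def plan_change(baseline: str, source: str, target: str) -> str:
    """Three-way comparison used when generating the delta script.

    baseline: the object's definition when work started (from version control)
    source:   the definition being promoted
    target:   the definition currently in the target environment
    """
    if target == baseline:
        return "deploy"    # target untouched since baseline: safe to promote
    if target == source:
        return "skip"      # target already has the change
    return "conflict"      # out-of-process change in target: merge required

# Someone hot-fixed the procedure directly in production after the baseline:
print(plan_change(
    baseline="CREATE PROC p AS SELECT 1",
    source="CREATE PROC p AS SELECT 2",
    target="CREATE PROC p AS SELECT 99",
))  # -> "conflict": a simple compare-and-sync would silently overwrite the fix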

How the Database Must Support Continuous Delivery

Continuous delivery is important and should be used in any IT organization to support business requirements. CD has evolved from the understanding that IT is part of the business. The business cannot wait a year for the release of new features; the market will have changed by then, and what is relevant now may no longer be!

To support continuous delivery, software automation is required. There are many pieces and technologies in an application, each requiring a slightly different tool to support the two basic truths: a reliable automation process and version control.

The database version control must enforce documentation of changes using check-ins, regardless of how the changes are made. The automation that deploys database changes must use the version control repository to protect the target from wrong deployments and to alert developers to conflicts when they do happen.

User Comments

1 comment
Damon Jebb

I must be missing something here; the best way to deliver any change, whether code or database structure, must surely be to develop and test the change before applying it with appropriate testing to QA, preprod, and prod. This is the build once, deploy many approach that you suggest is a cause of problems with databases; I would suggest that the issue with database management is not this process but the failure to achieve the process.

Your solution, allowing changes to be made in production and then back-ported to lower environments, is designed to subvert the process, and results in what is, to my mind, a fundamentally flawed approach to delivery of change. How does this process ensure that the changes that are developed and tested in the dev/int/QA environments *are* the code that eventually gets to production? How does it ensure that the changes that are made in production are properly recorded in source control? Are the delta scripts for the environments built from source control or by comparing the status of the 'source' database with that of the target database?

The problem here is that we are stuck in the mindset that the DBA is allowed to make ad hoc changes to the production system and those changes are manually hand-crafted in the system.  We would never allow a developer to do that to a piece of java code running in the app server, so why on earth is it acceptable to do so in the database?  The process seems to actively encourage 'out of process' changes rather than providing a clean and quick solution that would allow all changes to be 'in process'; it is a clean, repeatable, reliable change process which is used to deliver *all* change that has been proven as the route to reliable, high-quality software delivery for almost all other systems and is surely the thing to aspire to achieve for database systems too.

September 25, 2014 - 5:29am
