Introduction

If you work in the .NET stack, the main way to have versioned database projects where changes can be easily monitored is to use SQL Server Data Tools (SSDT), available as part of Visual Studio. For some sad reason, however, these have been left to languish and there are various shortcomings and tooling problems that have not been fixed in over five years.
Fortunately, most of these have workarounds, which are not that hard to understand.
Using a combination of hacks and workarounds, we can plug this into our build pipeline and deploy it manually or automatically to our SQL servers.
Getting the tooling

Getting the tooling is straightforward. Run the Visual Studio installer and choose the Data storage and processing workload to install. This gives you the Visual Studio templates, which is enough to get your project started and to be able to deploy directly from Visual Studio to a SQL Server instance. You can also reverse engineer an existing database and deploy updates rather than dropping and recreating a database.
Creating your project

In Visual Studio, create a new project as usual and search for SQL Server Database Project, which should be underneath Other Languages -> SQL Server. Give it a name as you do with any other project and this gives you an empty project that, so far, does nothing.
Reverse Engineering an existing database

You can skip this step if you are not creating a project from an existing database. Otherwise, right-click the project name and choose Schema Compare. This brings up the comparison dialog which, by default, points from your project to a target. Click the double arrow in the middle to swap this round and then choose Select source. This brings up a dialog to compare against either another project, a Data-tier Application (if you have a DACPAC file available) or a live database server.
In this example, I will choose Database and press Select Connection. This might have history in it; otherwise, click Browse and either browse local servers if you wish or directly enter the details to connect to your server. Warning: make sure you tick Remember Password here, otherwise a bug in Visual Studio might leave you unable to connect to this same data source again! From what I understand, if you don't choose to remember, the history tab will contain a password-less connection which will be used for subsequent connections even if you attempt to browse to it again, and it won't prompt you for the password. If you have done this, you might need to create a new schema compare file, right-click to remove the old item from history and browse to it again.
Click OK and then press Compare in the tabbed view to compare the database with your empty project. You will see all of the objects in the source database listed and you can choose which of these to copy into your project. With what you have selected, press the Update button and SQL files will be created in your project to generate the schema required.
Referential Integrity

Something that is both useful and annoying is that the project will not build unless it can resolve all of the object references. If you build your new project and look at the errors, you will see what I mean (unless you have used a simple database which is already OK). A typical error will be "User username has an unresolved reference to Login loginname", caused by the reverse-engineered script CREATE USER username FOR LOGIN loginname.

Of course, this occurs because many of the references are to objects that are outside of your database. For example, any references to sys views, logins or system tables will all exist in master and will not compile in your project.
If you are never going to create the database from scratch, you can probably just delete all the roles and users but if you reference system items, you cannot build until you add a reference to the master database schema.
If you want everything to work and/or you need to create everything from scratch, you need to create another database project for every external reference including master. If you are referencing logins, they will not exist in the default master reference so you will need another project just for shared logins.
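As a sketch, the shared logins project might contain nothing more than stub scripts like the following (the login name is hypothetical and the password is a placeholder, since the project only needs the object to be defined in order to resolve references):

```sql
-- Stub definition for a server login referenced by CREATE USER ... FOR LOGIN
-- in the main database project. The password here is only a placeholder to
-- satisfy the parser; referencing this project resolves the build error.
CREATE LOGIN [MyAppLogin]
    WITH PASSWORD = 'placeholder-only', CHECK_POLICY = OFF;
```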
Referencing master

Referencing master (and msdb if needed) is straightforward because you simply right-click References and choose Add Database Reference, then choose System database and select the one you want to reference. It will then import, which should remove any errors related to references to master objects like sys.views.
Referencing other projects

The way you reference other projects depends on whether you want them all to be in the same solution (which is easier) or whether you want them to exist and be maintained separately.
To add them into the same solution, they work in the same way as your current project. Create a new project in the solution and either reverse-engineer it or just add stub definitions for the references that are broken in your other project. Then add it as a reference by right-clicking References, choosing Add Database Reference and then choosing Database projects in the current solution. Set a default name for the database and choose a variable name if it needs substituting into deployment scripts.
To reference separate projects, create another solution, repeat the steps above and, once completed, build that project and reference its output DACPAC file from this project by right-clicking References, choosing Add Database Reference and selecting the Data-tier Application (.dacpac) option. I don't know whether the build will fail if the dacpac is not available on the build server, but I suspect it will, so I have a pre-build step that copies it from the source bin/Release folder into a local XRefs folder and then reference it locally.
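As a sketch, that pre-build step (Project Properties -> Build Events) can be a plain copy command; the project name and paths below are assumptions based on my layout:

```bat
rem Copy the external project's built dacpac into a local XRefs folder
rem so the Data-tier Application reference resolves on any machine.
rem Paths are examples - adjust to your own folder structure.
xcopy /Y "..\..\SharedLogins\bin\Release\SharedLogins.dacpac" "$(ProjectDir)XRefs\"
```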
Database Settings

Some issues you will only find by trial and error, but one thing I found was that the default collation for my database project was not automatically reverse-engineered from the source database. To set the database settings used when deploying a new database, right-click the project and choose Properties -> Project Settings -> Database Settings, of which there are many: contained databases, all the usual file settings and also more obscure options that most of us don't understand. Some of these can be overridden at deployment time via SqlPackage.
Making it Build

When I first built the project, there were lots of warnings related to the case-sensitivity of object references, even though SQL Server doesn't care, e.g. MemberID instead of MemberId. You can suppress specific SQL warnings in the Build tab of the project properties, in the Suppress Transact-SQL warnings box. Although removing warnings is optional, it is good practice to either fix things you care about or suppress things you don't. That way, any new warnings are obvious.
You should be able to get the project to build without errors and hopefully without warnings in which case you might decide to either get it building on your CI server or otherwise continue to add post-deployment scripts to populate default data into the tables.
Post-deployment Data

If you are creating a database from scratch, you probably need to add some default data into certain tables. This is quite easy, except that the SQL project only allows a single script to do this by default. Fortunately, that script can pull in others as long as it is in SQLCMD mode!
Create an optional folder and add a new Post-Deployment Script from the new item dialog. All this does is add an empty script and set its Build Action to PostDeploy. Be very careful that you don't try and add another one, since the first one will silently be set back to None (there really should be a warning about this).
Once you have added this file, you can add other plain SQL files (ideally in the same folder) without setting them to PostDeploy. Once this is done, you can then add references to these generic scripts from the post-deployment one like this:
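The include syntax is SQLCMD's :r directive; a minimal post-deployment script referencing two hypothetical data scripts might look like this:

```sql
/*
Post-Deployment Script (Build Action = PostDeploy).
Each :r directive pulls the referenced script into this one at
deployment time. The script names below are examples.
*/
:r .\CountryData.sql
:r .\DefaultSettingsData.sql
```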
If the editor shows an error, just press the SQLCMD Mode button (the one with an exclamation mark on it) at the top of the file. Note that the paths are relative to the current script, not the project.
Top tip: if you want to generate the insert statements automatically from a database, right-click the database in SQL Server Management Studio and choose Tasks -> Generate Scripts. Here you can choose the tables to export data for but, on the Set Scripting Options page, ensure you select Advanced and make sure Types of data to script is set to Data only; in my case, I also disable generation of the USE DATABASE statement. Also, choose file as the output medium, since the new query window option has a very low memory limit and will error for anything except the smallest generation. The results can be copied into your post-deployment scripts.
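The generated output is just plain INSERT statements, something like the following (the table and values here are purely illustrative):

```sql
-- Example of what Generate Scripts produces with "Data only" selected.
-- IDENTITY_INSERT is switched on so explicit key values can be inserted.
SET IDENTITY_INSERT [dbo].[Country] ON
INSERT [dbo].[Country] ([Id], [Name]) VALUES (1, N'United Kingdom')
INSERT [dbo].[Country] ([Id], [Name]) VALUES (2, N'France')
SET IDENTITY_INSERT [dbo].[Country] OFF
```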
Note that separating the data scripts and merging them via the single post-deployment file is optional, since everything is treated as one large script when deployed to a database.
Variable Replacement

SSDT allows you to do variable replacement in deployment scripts using $(variablename) syntax; for example, you could change the database name for an external reference. Variables are always created for external references, and these should be used in place of hard-coded references. This allows, for example, a deployment of Database1 -> ExternalDatabase1 to be done with the same project as a deployment of Database2 -> ExternalDatabase2.
You can also add your own variables in the Project Settings and access these in the same way.
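For example, a post-deployment script can reference an external database through its variable rather than a hard-coded name (the variable, table and column names here are hypothetical):

```sql
-- $(ExternalDb) is substituted at deployment time, so the same script
-- works whether the target is ExternalDatabase1 or ExternalDatabase2.
INSERT INTO [dbo].[AuditLog] ([Message])
SELECT [Message]
FROM [$(ExternalDb)].[dbo].[StagingMessages];
```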
Building on the Build Server

You might think that this is simply a case of adding the Data storage and processing workload to the build server and then building with the Build Tools, but you would be wrong! For reasons that can only be described as poor, the Build Tools a) do not include all of the files that Visual Studio does and b) put the files they do add into a different location. The build will fail.
Fortunately, the errors will be very explicit and will allow you to copy files from your developer PC, or from elsewhere on the build server, into the correct locations, saving you from installing either an old version of the Build Tools or the full version of Visual Studio.
Also note that, in my case, TeamCity detected that I needed Visual Studio 2013, which is the solution version that Visual Studio 2017 writes when creating a SQL Server Database Project - another very poor situation. I simply told TeamCity to use 2017 instead, which works fine.
The two files the build is looking for are: a) the targets file for the build, which is present on the build server but installed to another location - simply find it and copy it to the location the build error says it is looking in; and b) the schema definitions for master and msdb, which are NOT installed by the Build Tools and need to be copied from a machine that has full Visual Studio installed into the location specified by the error. Again, this is relatively easy, even though it is annoying.
Deploying from the Build Server/CD Server

At this point, your individual situation will dictate what you want to do next but, currently, I have my build deploying a fixed-name instance to a test database server for use in our functional tests. This simply involves SqlPackage.exe, which can take the dacpac from the build and do a number of things with it, including publish, script and report actions. See the SqlPackage documentation for more details.
The use of build variables is up to you, but I have had success with the following command (in TeamCity, I am using the Command Line - Executable with parameters runner):
C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe
The % items are build variables that can be injected. Also, you must provide values for your variables EVEN IF they have default values. The deployment only runs scripts in the main database project, so it won't attempt to create the logins that are specified in the ExternalDatabase or master references.
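For reference, a publish command might look something like the following sketch; the server, database and variable names are placeholders and the /v: values must match the SQLCMD variables defined in your project:

```bat
rem Publish the built dacpac to a test server. %test.sql.server% is a
rem TeamCity build variable; the other names here are examples only.
"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" ^
  /Action:Publish ^
  /SourceFile:"MyDatabase.dacpac" ^
  /TargetServerName:"%test.sql.server%" ^
  /TargetDatabaseName:"MyDatabase_Test" ^
  /v:ExternalDb="ExternalDatabase_Test"
```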
Versioning and deployment

I haven't got this far yet but, basically, you can package the dacpac and whatever else you need into a NuGet package like anything else. Mine is only 3MB including the deployment scripts, so it won't kill NuGet. You can then use something like Octopus Deploy to both version these packages and move them to the correct environments.
I want to think carefully about variables for these deployments so this can all be as automatic as possible, but I will try to update this article later if I remember.