Migrating a Database to SQL Azure using SSDT

With current interest in cloud computing, migration to SQL Azure is a hot topic.  Migrating a database to the cloud is a significant undertaking, with many architectural challenges to be considered.  High among these is that some SQL Server features are not supported, or only partially supported, in SQL Azure, requiring you to identify and resolve any dependencies your database has on these features.

If there are no such dependencies then moving the database itself is largely a database management scenario.   With SSMS in SQL 2012 you can extract a dacpac (schema only) from the database and then deploy it to SQL Azure, or, if you want to deploy the data with the schema, you can use the Deploy Database to SQL Azure feature, new in SQL 2012, which under the hood exports a bacpac and then imports it to SQL Azure.  You could also use SSDT to deploy just the schema but if you’re not making any changes it adds little value.  One thing all these methods have in common is a validation pipeline that blocks deployment if SQL Azure compatibility issues are detected.

If the database won’t deploy without change then you’re into a development scenario and SSDT has plenty to offer.  You will need to identify any unsupported features the database depends on, and then adapt the design of the database (and any affected applications) accordingly.  SSDT lets you work on the schema offline in a database project.  With SQL Azure designated as the target platform, building the project highlights any compatibility issues, which you can then work through, correcting or removing the offending object definitions.  Once the project builds you can deploy and test the database locally using Local DB, the lightweight sandbox database service that comes with SSDT, and then deploy it to SQL Azure to complete your testing in the cloud.

Let’s work through an example to see how this works and explore some interesting options.  We’ll use the Adventure Works LT 2008 database, which is small enough to work through in a few minutes but has enough problems to illustrate the process nicely.

To follow along you’ll need access to SQL Server 2008 or a later release.  You also need SSMS and SSDT from SQL Server 2012.  Or, if you want to skip the SSMS parts of the walkthrough, you can just download and install SSDT (which is free).  Next, download and install the AdventureWorksLT2008 sample database.  You’ll also need a SQL Azure account if you want to deploy the database to the cloud; you can open a free trial account here.

First let’s see whether the Adventure Works database we’ve downloaded is SQL Azure compatible.  One way to do this is to try to deploy it.  Open SSMS, locate the database, and try Tasks > Deploy Database to SQL Azure.  This is one of several DAC-related tasks you can perform on the database.  In SQL Server 2012, SSMS and SSDT support a unified DAC infrastructure, so both tools can process dacpacs created by either tool.

Go through the deployment wizard and pick a minimum-sized Web edition database on your SQL Azure account for this experiment – don’t worry about incurring costs at this point, as the operation will fail. 

While you get a lengthy error list (above right), it’s just a report from the task and can’t be used interactively.  If we switch over to SSDT, however, we can leverage its integrated SQL Azure validation to help us correct these errors.  We’ll come back to SSMS later.

First let’s import the database into a SQL Server Database project.  SSDT provides two ‘import’ entry points – you can either create a SQL Server Database Project in the Solution Explorer and then Import into it from a database or, my favorite, you can locate the database in SQL Server Object Explorer (SSOX) and choose Create New Project (below). 

Either way the outcome is the same – a project that fully describes the Adventure Works database. Once you have populated the project you need to change its target platform to SQL Azure on the Project Settings tab of the Project Properties dialog.

With the target platform changed, Build the project to validate the database schema.  At this point you will get the same long list of errors that you saw when attempting to deploy the database in SSMS, only now you can get started on fixing them.  Many errors are systemic – SQL Azure does not support Extended Properties, for example, and as Adventure Works uses these widely this accounts for many of the errors.  Double-clicking on an error in the Error List takes you to the offending code so that you can fix it. Fixing errors this way is very deliberate as you can review and consider each issue individually.  While fixing these errors is an important one-time task which needs to be done thoroughly, it can also be very time consuming.  Let’s look at another way to build the project that can resolve many of the issues more quickly.
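
Before moving on, it helps to see what one of these systemic errors looks like in the imported code.  Extended properties are scripted as calls to sys.sp_addextendedproperty, so the project ends up full of statements like the sketch below (the property value shown is illustrative, not necessarily what Adventure Works contains), and each one is reported as an error once the project targets SQL Azure:

    -- Illustrative extended property definition as it appears in the imported
    -- project scripts (the @value text here is made up). SQL Azure does not
    -- support extended properties, so every statement like this is flagged
    -- when the target platform is SQL Azure.
    EXECUTE sys.sp_addextendedproperty
        @name = N'MS_Description',
        @value = N'Products sold or used in the manufacture of sold products.',
        @level0type = N'SCHEMA', @level0name = N'SalesLT',
        @level1type = N'TABLE',  @level1name = N'Product';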

Instead of using Import you can use Schema Compare to populate the project selectively from the database.  Schema Compare compares any two schemas and can update one from the other.  Let’s use it to compare the Adventure Works database schema to an empty project.  If you just Compare and then Update Target, Schema Compare will act like Import and produce the same outcome.  However, Schema Compare supports a couple of ways in which you can influence the outcome.  You can review the comparison results and exclude individual actions from the update, which would let you exclude the Extended Properties from each object, for example.  While this is a little easier than deleting them from script after import, it is still a lengthy exercise.  More interesting is Schema Compare’s ability to exclude all objects of a specific type from the comparison, and thus the update.  Let’s see how that works.

First create a new comparison using SQL > Schema Compare > New Schema Comparison.  Then open the comparison Options dialog and on the Object Types tab uncheck the object types you want to exclude.  This lets you exclude, for example, Extended Properties on all objects.  With a single click you’ll cut out hundreds of errors!   Of course it needn’t stop with Extended Properties - there are many object types that SQL Azure doesn’t support and which can be excluded by Schema Compare.  You can see some of these in the Options dialog below with a full list of objects to exclude at the end of this post.

Given how many objects you need to uncheck and that you might want to repeat the process, it’s a good idea to save the comparison after configuring the object list but before setting source and target schemas.  This file can then be used as a migration comparison ‘profile’.  (I suggest working with a copy of the file as it’s easy to overwrite it unintentionally). Now let’s see how we can use our new file.   

First create an empty database project.  Then with a copy of the .scmp file we just created select the Adventure Works database as the source schema and the new project as the target and Compare the two.  The comparison will show a series of Add actions that will be applied if you press Update.  (If you select some of the tables in the results grid – without unchecking the action – you can see in the script display at the bottom that there are no Extended Properties).  When you're ready press Update and the project will be populated without any of the unsupported objects. 

If you now set the project’s Target Platform to SQL Azure and build, you will see far fewer errors.  While filtering out unsupported objects cut out most of the problems, there are still some unsupported properties on a few objects that you will have to address.  You will need to fix the following (the quick fixes below ‘solve’ the problems, but the impact of each change needs to be considered and accommodated in your overall solution long term; a short before/after sketch follows the list):

  • Delete the User [NT AUTHORITY\NETWORK SERVICE] which cannot be used in SQL Azure.
  • Remove all uses of ROWGUIDCOL on UniqueIdentifier columns.
    (To fix, go to the first occurrence, select the string and use Find and Replace (Ctrl + H); press Replace All to fix all occurrences in the project). 
  • Remove all uses of NOT FOR REPLICATION.
  • Remove the view [vProductModelCatalogDescription] from the build as it uses XQuery. (Change the Build Action property on the file from ‘Build’ to ‘None’ then Build the project again.  Notice how this preserves the definition to be fixed later.) 
  • Modify the datatype of column ProductModel.CatalogDescription from XML to nvarchar(max).
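
To make the column-level fixes concrete, here is a rough before/after sketch.  The column definitions are modeled on Adventure Works tables but are illustrative only; the exact definitions in your imported project will differ:

    -- Before (as imported): options that are rejected when targeting SQL Azure
    [SalesOrderID]       INT              IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
    [rowguid]            UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL,
    [CatalogDescription] XML              NULL,

    -- After (edited in the project): the same columns with the unsupported
    -- options removed and the XML column retyped as nvarchar(max)
    [SalesOrderID]       INT              IDENTITY (1, 1) NOT NULL,
    [rowguid]            UNIQUEIDENTIFIER NOT NULL,
    [CatalogDescription] NVARCHAR (MAX)   NULL,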

At this point you need to build the project again.  This will expose one remaining error, caused by the table dbo.BuildVersion not having a clustered index, which SQL Azure requires.  Marking the SystemInformationID column as the primary key fixes this.  Build again and you should get no errors.  Inasmuch as we have ‘fixed’ the problems, our schema conversion is complete (you would clearly need to address the storage and handling of catalog description data holistically later).  If you want to see a summary of all that has changed, you can open a new Schema Compare and compare the project with your original database.
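
For reference, the fixed-up table definition in the project will look something like the sketch below.  The non-key columns and the constraint name are indicative only; check them against your own imported script:

    CREATE TABLE [dbo].[BuildVersion] (
        [SystemInformationID] TINYINT       NOT NULL,
        [Database Version]    NVARCHAR (25) NOT NULL,
        [VersionDate]         DATETIME      NOT NULL,
        [ModifiedDate]        DATETIME      NOT NULL,
        -- The clustered primary key satisfies SQL Azure's requirement that
        -- every table have a clustered index
        CONSTRAINT [PK_BuildVersion_SystemInformationID]
            PRIMARY KEY CLUSTERED ([SystemInformationID])
    );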

You could press F5 at this point to deploy and test the schema locally using Local DB, or Publish it to SQL Azure to test.  If you don’t want to deploy the data from the original database, your schema migration task is effectively complete.

An alternative at this point is to update your original database to make it SQL Azure compliant and then deploy it to the cloud with its data using SSMS and the bacpac mechanism that we tried earlier.  Let’s step through that approach as it’s rather more interesting and brings us full circle.

First, reset the project’s Target Platform to its original value (the version of SQL Server hosting your Adventure Works database).  This ensures the database will be validated correctly and can be deployed to that target, and also illustrates how switching the target platform can be used to ensure a design is platform neutral.

Next, Publish your changed schema back to the Adventure Works database.  Publishing will update your original database (dropping several objects from the schema and losing some data) so you might want to do this to a copy of the database.  To make such sweeping changes to a database you will need to override two of the default deployment/update options:

  • Uncheck Block incremental deployment if data loss might occur
  • Check DROP objects in target but not in project

You’ll find these options by pressing the Advanced… button on the Publish dialog (below).

If you open the Data Tools Operations window after publishing you can look at the Preview, which summarizes the changes, and the script actually used.  (Publish also lets you generate a script that you might hand off to a DBA if you don’t have permission to update the database directly.)
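
As a rough, hypothetical illustration (not the actual generated output), the preview and script will include statements of this kind, dropping the objects you removed from the project and altering the columns you changed:

    -- The database user removed from the project is dropped from the target database
    DROP USER [NT AUTHORITY\NETWORK SERVICE];

    -- The column whose type changed in the project is altered to match (SSDT may
    -- instead choose a table rebuild, depending on the data and options in effect)
    ALTER TABLE [SalesLT].[ProductModel]
        ALTER COLUMN [CatalogDescription] NVARCHAR (MAX) NULL;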

Once the Adventure Works database is updated you should be able to deploy it to SQL Azure with its data.  Go back to SSMS and try the Deploy Database to SQL Azure task again on the updated database, and it should now work like a charm!

Summary

Migrating a database to the cloud is a significant architectural change, often requiring changes to the database schema; SSDT and SSMS can provide valuable support during the process.

  • SSDT lets you bring a database schema offline into a project where you can change it to be SQL Azure compatible.
  • Schema Compare can be used to filter out incompatible objects when creating the project.
  • Setting the target platform to SQL Azure causes SSDT to report compatibility errors which can be fixed and tested locally using Local DB.
  • SSDT can deploy the schema to the cloud or update the database so that it can be deployed with its data using SSMS.

Appendix

Objects not supported by SQL Azure to exclude from a comparison:

  • Aggregates
  • Application Roles
  • Assemblies
  • Asymmetric keys
  • Broker Priorities
  • Certificates
  • Contracts
  • Defaults
  • Extended Properties
  • FileGroups
  • FileTables
  • Full Text Catalogs
  • Full Text Indexes
  • Full Text Stoplists
  • Message Types
  • Partition Functions
  • Partition Schemes
  • Queues
  • Remote Service Bindings
  • Rules
  • Sequences
  • Services
  • Symmetric Keys
  • User Defined Types (CLR)
  • XML Indexes
  • XML Schema Collections
