I have just been looking at doing something fairly routine - automating backups from SQL Azure so that they can be stored off-site. This is partly for resilience, partly to reduce storage costs and partly to be able to use the backup for reporting without having to load up the live database.

There is a surprising lack of such tools for SQL Azure, but I found the start of a helpful project here: https://github.com/richorama/SQLDatabaseBackup which I forked and modified to perform the backup from Task Scheduler on a Windows server.

The project on GitHub only does the first bit, which is to copy the database and then export it to a bacpac that can be stored and/or restored onto another machine.

IMPORTANT: you should understand the trade-off between copy/export and export-only, especially as it relates to charges.

Copy/Export - Since export does not guarantee transactional consistency of your database, you need to use the copy functionality in Azure first. Copy is much slower than export (perhaps 20 or 30 times slower); even my few-hundred-kilobyte database took over a minute to "copy", and this is done via SQL with CREATE DATABASE ... AS COPY OF... Also, for every new database you create, you are charged a minimum of a whole day of database hosting (even if it only exists for a few minutes), as per the Azure pricing. If you copy, for instance, 12 or 24 times per day, you would be charged for 12 or 24 days' worth of database charges PER DAY, the amount depending on the size of your database. This would quickly become very expensive, especially since the database copies are largely transient.
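
For reference, the copy step boils down to something like the following. This is a minimal sketch only, using ADO.NET against the server's master database; the server, credentials and database names are placeholders, and it simply polls sys.databases until the new copy leaves the COPYING state:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class CopyDatabase
{
    static void Main()
    {
        // Placeholder server and credentials; connect to the master database of the SQL Azure server.
        var master = "Server=tcp:myserver.database.windows.net;Database=master;" +
                     "User ID=admin@myserver;Password=...;Encrypt=True;";

        using (var conn = new SqlConnection(master))
        {
            conn.Open();

            // Start the server-side copy; the statement returns quickly but the
            // copy itself carries on asynchronously in the background.
            new SqlCommand("CREATE DATABASE [MyDb_Copy] AS COPY OF [MyDb]", conn)
                .ExecuteNonQuery();

            // Poll sys.databases until the new database is no longer COPYING.
            string state;
            do
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));
                state = (string)new SqlCommand(
                    "SELECT state_desc FROM sys.databases WHERE name = 'MyDb_Copy'", conn)
                    .ExecuteScalar();
            }
            while (state == "COPYING");

            Console.WriteLine("Copy finished with state: " + state);
        }
    }
}
```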

Export Only - Export-only, on the other hand, is much quicker than copying first, but leaves you with something that is potentially transactionally inconsistent. This means you might get half of an update, for instance. Because the export is made directly to blob storage, you are only charged for the storage of the export and not for another database. Blob storage is charged as an average over the month (and is very cheap anyway), so storing something while you download it costs next to nothing.
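
If the machine running the backup job can hold the file anyway, one alternative route (which skips blob storage altogether) is to let the DacFx client library write the bacpac straight to local disk. A minimal sketch, assuming the Microsoft.SqlServer.Dac package and placeholder connection details:

```csharp
using Microsoft.SqlServer.Dac;

class ExportBacpac
{
    static void Main()
    {
        // Placeholder connection details for the database being exported.
        var source = "Server=tcp:myserver.database.windows.net;Database=MyDb;" +
                     "User ID=admin@myserver;Password=...;Encrypt=True;";

        // DacServices drives the same engine that produces .bacpac files,
        // writing the result to a local path instead of blob storage.
        var services = new DacServices(source);
        services.ExportBacpac(@"c:\temp\MyDb.bacpac", "MyDb");
    }
}
```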

In my fork of SQLDatabaseBackup, I copy the blob down to the local file system and then delete the blob; I then drop the current database copy and import the one I downloaded. Currently, since the blob gets a new name each time it is created, the downloaded files all accumulate in c:\temp and can be kept, backed up or deleted as required. The whole process takes perhaps 10 seconds, although the database is small (~700 KB), so I don't know whether this time scales roughly linearly with database size or not.
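
In outline, those steps look something like the sketch below. It assumes the Azure storage client library (Microsoft.WindowsAzure.Storage) and DacFx (Microsoft.SqlServer.Dac); the storage account, blob name and database names are placeholders rather than what the fork actually uses:

```csharp
using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Dac;
using Microsoft.WindowsAzure.Storage;

class DownloadAndImport
{
    static void Main()
    {
        // Placeholder storage account, container and blob names.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=mybackups;AccountKey=...");
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference("backups")
                          .GetBlockBlobReference("MyDb_20130401120000.bacpac");

        // Pull the bacpac down to c:\temp, then remove the blob so storage
        // charges stop accruing.
        var localPath = Path.Combine(@"c:\temp", blob.Name);
        using (var file = File.Create(localPath))
        {
            blob.DownloadToStream(file);
        }
        blob.DeleteIfExists();

        // Drop the previous local copy (placeholder database name) ...
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=True;"))
        {
            conn.Open();
            new SqlCommand(
                "IF DB_ID('MyDbReporting') IS NOT NULL DROP DATABASE [MyDbReporting]", conn)
                .ExecuteNonQuery();
        }

        // ... then import the freshly downloaded bacpac in its place.
        var services = new DacServices("Server=.;Integrated Security=True;");
        using (var package = BacPackage.Load(localPath))
        {
            services.ImportBacpac(package, "MyDbReporting");
        }
    }
}
```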

You need to decide what you actually need the backups for. Since SQL Azure uses full transaction logs, it might be enough to use copy/export once per day (effectively doubling your database charges, unfortunately) and rely on the cloud's resilience for anything in between, bearing in mind they have backup power supplies and potentially redundancy and everything else. The other way to tackle this is as a basic cost/flexibility trade-off: if you are asked for multiple backups, you might point out that this comes at a large cost. If you are a bank or similar, the extra cost is peanuts compared to the requirement for backups; otherwise, live with the old-school approach where checks and validation protected against some mistakes, and training and disciplinary action dealt with the rest!

If, like me, you want the backups largely for reporting, you might be able to live with the odd inconsistent row and simply export directly to blob storage.

Microsoft have mentioned on some forums an intention to create something more usable and less expensive for regular backups, and imho they need it urgently if they expect people to use Azure for heavy business use. There are no dates or confirmations about when this will happen, so don't hold your breath.