Yet another simple attempt to document my continuing adventures with the Windows Azure Platform. In today’s edition, our hero attempts to move a simple on-premises database to SQL Azure and access it from an on-premises application.
Create our SQL Azure Server/Database
I’ve covered getting Windows Azure platform benefits in another post. So we’ll presume you have already either purchased a subscription for SQL Azure or claimed any benefits you’re entitled to. My MSDN Premium subscription benefits include three 1 GB SQL Azure databases. Today I’m going to set up my SQL Azure server and create one of those instances.
I start by pointing my favorite browser to https://sql.azure.com and signing in with the Live ID that is associated with my subscription. Once signed in, you should be at the SQL Azure Developer Portal’s landing page (as seen below) with a list of the “projects” (aka subscriptions) associated with my Live ID.
You’ll click on a project and, if prompted, accept the terms of service. Since this is my first time logging into my SQL Azure subscription, I need to start by creating an SA (system administrator) account and specifying a region for my database. The region selection here is important for reasons I’ll get to later in this article.
So fill in the form, select a location (aka datacenter), and click on “Create Server”. After a few seconds, our server has been partitioned and a master database has already been created within it. The only thing lacking here is that I really would have liked to have been able to provide my own unique name for the server. Oh well. :)
At this point we need to go to the “Firewall Settings” tab and adjust the security settings for our database. If we don’t, we won’t be able to connect to it. These settings apply to all database instances within our SQL Azure server. For now, I’m going to set it to allow any IP to connect. This is not something I would generally recommend. A better practice would be to add rules that only allow connections from our application’s IP range, our network’s subnet, or maybe even only from Microsoft-hosted services. But I’m feeling a touch lazy today.
Once we’ve given the firewall settings a few minutes to take effect, select the “master” database instance and test connectivity. With that verified, we’re ready to get connected.
Connecting to SQL Azure
There are already many blog posts available that explain how to get SQL Server 2008 Management Studio to connect to SQL Azure. However, if you grab the SQL Server Management Studio 2008 R2 CTP, you’ll find this much less problematic. Using this version, it’s as simple as putting your server name (available from the portal, in the form of <somevalue>.database.windows.net) into the login box along with your administrator username and password.
If you can’t use the R2 CTP, you can find alternatives to get SQL Server 2008 Management Studio working in various blog articles.
Once that connection is established, I can run a couple of SQL scripts I had already exported to create my database. I generated these scripts from a local SQL Server database, and in my case they required only two minor changes before they ran successfully.
Connecting to it from the Visual Studio 2010 RC’s Server Explorer is just as simple. The same goes for having your applications connect: you just use a connection string and you’re in. The one thing all these methods have in common is that you need port 1433 open for outbound traffic, and the IP address you are connecting from has to be allowed by the SQL Azure firewall settings.
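For what it’s worth, here’s a minimal sketch of what that looks like in code. The server name, database name, and credentials below are placeholders you’d swap for the values shown in your own portal:

```csharp
// A minimal sketch of connecting to SQL Azure from a .NET application.
// Server, database, and credentials are placeholders -- substitute your own.
using System;
using System.Data.SqlClient;

class SqlAzureConnectTest
{
    static void Main()
    {
        // SQL Azure needs TCP port 1433 outbound and an encrypted connection,
        // and the user name takes the form user@servername.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=MyDatabase;" +
            "User ID=myadmin@myserver;" +
            "Password=myPassword;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                // Prints the SQL Azure version string if the connection works.
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}
```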
I started by creating a new Windows Forms project and adding a data source to it. I could have configured this using a connection string generated from the portal or via any of my preconfigured data connections. I then added that data source to my form and launched the project. Simple as pie! Without writing a single line of code, I could connect to and update my SQL Azure database from an on-premises application.
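For anyone curious what the designer is wiring up under the covers, here’s roughly the hand-rolled equivalent using a SqlDataAdapter. The table and column names are made up purely for illustration:

```csharp
// Roughly what the designer-generated data source does for you: fill a table
// from SQL Azure, let the user edit it, then push the changes back.
// The Customers table and its columns are hypothetical.
using System.Data;
using System.Data.SqlClient;

class CustomerData
{
    private readonly string _connectionString;

    public CustomerData(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable LoadCustomers()
    {
        var table = new DataTable("Customers");
        using (var adapter = new SqlDataAdapter(
            "SELECT CustomerID, Name, City FROM Customers", _connectionString))
        {
            adapter.Fill(table);
        }
        return table;
    }

    public void SaveCustomers(DataTable table)
    {
        using (var adapter = new SqlDataAdapter(
            "SELECT CustomerID, Name, City FROM Customers", _connectionString))
        using (var builder = new SqlCommandBuilder(adapter))
        {
            // The command builder generates the INSERT/UPDATE/DELETE commands,
            // and Update pushes any row changes back to SQL Azure.
            adapter.Update(table);
        }
    }
}
```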
Why was that region selection important?
I said I’d get back to this and I have.
The reason the region selection is so important is that you don’t get charged for bandwidth within a datacenter (region). Additionally, by locating your database within the same datacenter as your application, you reduce connection latency. Yes, we could host the DB in Asia and run the app in the US, but that’s only going to slow down performance AND cost us more, so there’s no reason to do it. This is also why you may want to run a local copy of the database when doing development. There’s no sense paying bandwidth costs for accessing a hosted server when a local copy of SQL Express will do the job nicely.
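One easy way to switch between a local SQL Express database during development and SQL Azure in production (sketched below, with a hypothetical "MainDb" connection string name) is to keep the connection string in app.config so the code doesn’t care which one it’s talking to:

```csharp
// A sketch of keeping the database choice in configuration so the same code
// runs against local SQL Express while developing and SQL Azure when deployed.
// Requires a reference to System.Configuration; "MainDb" is a made-up name.
//
// app.config might contain:
//   <connectionStrings>
//     <add name="MainDb"
//          connectionString="Server=.\SQLEXPRESS;Database=MyDatabase;Integrated Security=True"/>
//     <!-- swap in the SQL Azure connection string at deployment time -->
//   </connectionStrings>
using System.Configuration;
using System.Data.SqlClient;

static class Db
{
    public static SqlConnection OpenConnection()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}
```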
But wait, what about the infamous “smoking hole” disaster recovery scenario? Does picking another region affect my disaster recovery plan? The short answer is “no”. People with greater insight into how SQL Azure is built tell us that three copies of your database are automatically created and maintained, and that at least one of those should be geographically diverse. If your database crashes, or the datacenter itself is destroyed by a giant radiation-mutated lizard, a backup copy will be activated and the logs applied.
In fact, you may not even notice if a simple database crash takes place. We’ll leave the question of “but don’t I need to know” for another day.
Some closing thoughts
In going through this exercise, a couple of things became apparent to me. First off, there are going to be some who dismiss SQL Azure as being too “dumbed down”. I’m admittedly not a DB guy. I know enough to stay away from bad practices, create the stored procedures/functions I need to do my job, and tell the difference between a physical and logical database schema. Hell, I’ve even been known to help debug a query performance issue on rare occasion. However, I am by no means someone who enjoys spending their time optimizing databases and monitoring their performance. As such, SQL Azure does a good job of meeting my basic needs. In its current state, it may not be an enterprise-level solution. But you know, I think it makes a pretty decent operational data store that would serve a simple application pretty well.
Those who are really into RDBMS systems and love tweaking and tuning them like a hot rod prepping for the quarter mile are likely to be disappointed by SQL Azure. However, I’m confident that the SQL Azure team is committed to this product. We’ve already seen the 1.1 version released with changes based directly on feedback they’ve received from the community. I also believe that the current size limits are based more on ensuring they have a stable service offering than on any real limitation. It wouldn’t surprise me if we see SSRS and 25 GB+ instances available before the end of the year. And I don’t think it’s too far-fetched to predict that 1 TB+ instances will happen eventually.
I’m also getting more of an idea of what MSFT may be thinking when it comes to the online subscriptions. This is admittedly a fairly new area for them (compared to license-based software distribution) and as such is subject to change, but it really strikes me that they are trying to encourage folks to tie a given application to a specific subscription rather than have multiple projects within a single subscriber account. It could go either way, but for enterprises interested in billing chargebacks, having a subscription for each “project” makes sense for simplicity’s sake.
I guess only the future will really tell the tale.
