T-SQL Tuesday #142: Using descriptive techniques to build database environments

Invitation from Frank Geisler.

In the glory days of old, you typically had to deal with one, two, or maybe three SQL Servers. As you all know, those times are over. With the rise of the cloud, every one of us has to deal with more and more systems, not only Infrastructure as a Service but also Platform as a Service (PaaS) offerings. The systems themselves are getting more complex through all the new services and technologies that are involved and somehow interconnected. New movements like Azure Arc-enabled Data Services bring a whole new aspect to the table, where you can easily choose whether to run your data workload on your on-premises Kubernetes cluster or in the cloud.

All these systems can easily be built with the Azure Portal, but this is not sustainable. Each time you use the portal, you must remember how to build a certain system and, more importantly, how to apply best practices. You can, for example, build an Azure SQL Database with an open endpoint on the internet and secure it with firewall settings, but this should be done with great caution because you are exposing your database to the internet. A better approach is to build an Azure SQL Database that has no public endpoint but instead a private endpoint into an Azure VNet that hosts the systems that must access the database, or that is connected to a local network via a VPN gateway. As you can imagine, there are a lot of moving parts to get such an environment up and running, and you must remember (or document) each of them. This is very cumbersome work. There must be a better solution, and for sure there is one: scripting.
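To make those moving parts a bit more concrete, here is a rough sketch of the private-endpoint setup described above, written as a declarative Bicep template (more on that approach below). All resource names are invented, the API versions are only illustrative, and a real environment would also need the VNet itself plus a private DNS zone for name resolution:

  // Sketch: an Azure SQL server without a public endpoint, reachable only
  // through a private endpoint in an existing VNet subnet.
  // Names like 'vnet-app' and 'snet-data' are placeholders.
  param location string = resourceGroup().location
  param sqlAdminLogin string
  @secure()
  param sqlAdminPassword string

  // The VNet hosting the systems that must access the database is assumed to exist
  resource vnet 'Microsoft.Network/virtualNetworks@2021-05-01' existing = {
    name: 'vnet-app'
  }

  resource sqlServer 'Microsoft.Sql/servers@2021-11-01' = {
    name: 'sql-demo-server'
    location: location
    properties: {
      administratorLogin: sqlAdminLogin
      administratorLoginPassword: sqlAdminPassword
      publicNetworkAccess: 'Disabled' // no open endpoint to the internet
    }
  }

  resource privateEndpoint 'Microsoft.Network/privateEndpoints@2021-05-01' = {
    name: 'pe-sql-demo-server'
    location: location
    properties: {
      subnet: {
        id: '${vnet.id}/subnets/snet-data'
      }
      privateLinkServiceConnections: [
        {
          name: 'sql-connection'
          properties: {
            privateLinkServiceId: sqlServer.id
            groupIds: [
              'sqlServer'
            ]
          }
        }
      ]
    }
  }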

When you write a script, you do the work once, and whenever the same or a similar situation comes up, e.g. deploying an Azure SQL Database according to best practices, you can just pull out your script and there you go. You can take this to another level by parameterizing your script, so that you only put in the parameters and let the script do the rest. Using this as a mantra, I developed several PowerShell scripts to build different cloud environments. This has the big advantage that the environment is documented, because you have a script, and that it is versioned as well, because all our scripts are saved in a source control system. The overall approach is called Infrastructure as Code (https://en.wikipedia.org/wiki/Infrastructure_as_code).
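As a minimal, hedged sketch of what such parameterization can look like in declarative form (the same idea works for a parameterized PowerShell script), here is a Bicep file, hypothetically called sqldb.bicep, that creates a database on an already existing logical server. The names, allowed values and API version are assumptions for illustration only:

  // Callers only supply what differs per environment;
  // the template encodes the best practices once.
  @description('Name of the existing logical SQL server')
  param sqlServerName string

  @description('Name of the database to create')
  param databaseName string

  @description('Service objective of the database')
  @allowed([
    'Basic'
    'S0'
    'S1'
    'S2'
  ])
  param skuName string = 'S0'

  param location string = resourceGroup().location

  resource sqlServer 'Microsoft.Sql/servers@2021-11-01' existing = {
    name: sqlServerName
  }

  resource database 'Microsoft.Sql/servers/databases@2021-11-01' = {
    parent: sqlServer
    name: databaseName
    location: location
    sku: {
      name: skuName
    }
  }

Deploying then boils down to handing over the parameters, for example: az deployment group create --resource-group rg-demo --template-file sqldb.bicep --parameters sqlServerName=sql-demo-server databaseName=sqldb-demo skuName=S1 (again, all names here are placeholders).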

But imperative scripting in PowerShell also has its shortcomings. The cloud, and the internet in general, is a very uncertain environment. While the script that deploys your environment is running, many things can happen: your internet connection can break down, a deployment step can fail for whatever reason, and so on. So you have to build many conditions into your script: if the resource group already exists, skip it and just build the Azure SQL Database; if the VNet exists, skip it; check whether all the needed subnets exist; and so on. Right? Wrong! Besides the imperative way of telling the Azure Resource Manager what to do, you can also use a declarative approach to build resources in Azure.

This declarative approach is very familiar to everyone who has ever written T-SQL code. When you write a query that selects data, you don't instruct the database system how to retrieve the data from the underlying file structure. You only tell the system what the data you are looking for should look like: select all the rows where the first name is "Frank". This is exactly the approach that ARM templates, Bicep templates or, in the Kubernetes world, YAML manifests take. The scripts are a description of what the target environment should look like. How that target environment is reached is entirely up to the underlying system, such as the Azure Resource Manager. And there is more: if you change an existing environment, only the parts that changed in the script are altered in the target environment. Say you have an Azure SQL Database of a certain size and you change the size in your Bicep script. The next time you deploy the script, the Azure SQL Database is resized without being deleted and redeployed.
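As a small, hedged illustration of that last point (placeholder names again): the database below is described with an 'S0' service objective, and because the default deployment mode is incremental, changing that single value and redeploying the same template lets the Azure Resource Manager figure out that only a resize is needed:

  param location string = resourceGroup().location

  resource sqlServer 'Microsoft.Sql/servers@2021-11-01' existing = {
    name: 'sql-demo-server' // an already existing logical server
  }

  resource database 'Microsoft.Sql/servers/databases@2021-11-01' = {
    parent: sqlServer
    name: 'sqldb-demo'
    location: location
    sku: {
      name: 'S0' // change this to e.g. 'S2' and redeploy:
                 // the database is resized in place, not dropped and recreated
    }
  }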

The ideal process for working with Infrastructure as Code is that the code is checked in to Azure DevOps and an automated process then deploys the changes to your target environment. To change your environment or add resources, you only have to write the needed changes into your Bicep scripts, check them in, and let Azure do the magic.

My invitation to you for this month’s #tsql2sday is:

This is my invitation to you this T-SQL Tuesday to think about deploying SQL components through descriptive methods and, of course, to blog about it. It does not matter whether you are using Azure with ARM templates or Bicep, or Kubernetes with YAML. Just write about it and build some cool new templates that implement some of your infrastructure or environment best practices. Or write an article about where you have already used descriptive scripts to build environments.

As always, there is a whole lot of material on the internet that you can use as a starting point. I have summarized a little bit of it here:

T-SQL Tuesday #139: The data world is hybrid

Invitation from Ben Weissman.

Our data estates are not just on premises anymore. And they haven’t all moved to the cloud either.

New offerings for a hybrid world like Azure Arc-enabled Data Services pop up but are far from being widely used in production environments.

It has become pretty evident to me that we'll be living in a hybrid world for a long time, potentially indefinitely. But what does that mean for our data strategy?

My Invitation

On this month’s #tsql2sday, I would therefore like to invite you all to share your hybrid and edge experiences!

Where are you on your journey?

Which challenges did you face?

How did you address them?

T-SQL Tuesday #103 – Azure SQL Database – Challenges, Pros and Cons, Issues

Invitation and review from Bjorn Peters.

Write what you think about Azure SQL Database

So this is my call for the June 2018 TSQL Tuesday:
Tell me/us whether you or your company has already started testing Azure SQL Database or Azure SQL Managed Instance, or whether you're already using it.
Tell us all about it:

  • What was your migration tool?
  • How did you plan that migration?
  • Were there any assessments before the migration?
  • Which problems occurred during your test-phase?
  • Which problems occurred during your migration?
  • Was there any automation around the migration?
  • Which scripting language was used for what? PowerShell or Azure CLI?
  • Is there any automation right now during normal operation?
  • Any issues, special requirements or anything else around using Azure SQL Database?
  • How do you monitor the database?
  • How do you do database maintenance?
  • Do you use the built-in tuning options?

Simply write about all of your experiences with Azure SQL Database or Azure SQL Managed Instance. Even if you don't see a future for Azure SQL Database, write about that too; everything is welcome!

T-SQL Tuesday #089 – The times they are a-changing

Invitation and roundup from Koen Verbeeck.

This month's topic is inspired by the blog post Will the Cloud Eat My DBA Job? by Kendra Little. Technology has changed a lot in the past years, especially with the cloud, globalization and automation. What impact has this had on your job? Do you feel endangered? Or do you have more exciting features/toys to work with? Do you embrace the change and learn new skills? Or do you hide in your cubicle and fear the robot uprising? Let us know how you handle all these changes, or how you feel there are no changes at all for your current job. I'm looking forward to learning how you feel about the future of data management/analysis and how you plan to deal with it.

T-SQL Tuesday #082 – To the Cloud… And Beyond!

Invitation from Jeffrey Verheul.

When Adam asked me if I wanted to host another T-SQL Tuesday, I immediately knew the topic I wanted to talk about: the cloud, and (if you want) specifically Azure SQL Database.

Last time we blogged about the cloud was back in December of 2013, when Jorge Segarra hosted this monthly party. Since then, “the cloud” (to use that buzzword again) has changed a lot, and I think the possibilities are endless nowadays.

The reason I would like to see you all blog about this topic is that I've been working with Azure SQL databases a lot since I switched jobs in December. Currently I'm working for a small start-up with a cloud-first focus, which means that, wherever possible, our (data) platform is hosted in Azure. And although hosting our platform in Azure makes it easier for us to scale parts of that platform, it also gives us new challenges to overcome. And I'm probably not the only one with that experience…

So with that said, I would like to give you the opportunity to blog about the cloud, in the broadest sense you can imagine. I'm imagining really interesting blogs about migrating to the cloud, missing features in Azure SQL Database, how you track down and solve performance problems, and so on. Please surprise us all with your view on the cloud.

T-SQL Tuesday #048 – Cloud Atlas

Invitation and roundup from Jorge Segarra.

Cloud. It’s the juggernaut buzzword in IT for the last couple of years now. By now you’ve surely been exposed to some aspect of it: Azure Virtual Machines, Windows Azure SQL Databases, Amazon EC2, Rackspace, etc. At this point in the game the cloud solutions are fairly mature and constantly evolving to better serve their customer base.

This month’s topic is all about the cloud. What’s your take on it? Have you used it? If so, let’s hear your experiences. Haven’t used it? Let’s hear why or why not. Do you like or dislike recent changes made to cloud services? It’s clear skies for writing! So let’s hear it, folks: where do you stand with the cloud?