T-SQL Tuesday #142: Using descriptive techniques to build database environments

Invitation from Frank Geisler.

In the good old days, you usually had to deal with one, two, or maybe three SQL Servers. As you all know, those times are over. With the rise of the cloud, every one of us must deal with more and more systems, not only Infrastructure as a Service (IaaS) but also Platform as a Service (PaaS) offerings. The systems themselves are getting more complex through all the new services and technologies that are involved and somehow interconnected. New movements like Azure Arc-enabled Data Services bring a whole new aspect to the table, letting you choose whether to run your data workload on your on-premises Kubernetes cluster or in the cloud.

All these systems can easily be built with the Azure Portal, but this is not sustainable. Each time you use the portal, you must remember how to build a certain system and, more importantly, how to apply best practices. Sure, you can build, for example, an Azure SQL Database with an open endpoint to the internet and secure it with firewall settings, but this should be done with great caution because you are exposing your database to the internet. A better approach is an Azure SQL Database that has no public endpoint but instead a private endpoint in an Azure VNet, one which hosts the systems that must access the database or which is connected to a local network via a VPN gateway. As you can imagine, there are a lot of moving parts in getting such an environment up and running, and you must remember (or document) each of them. This is very cumbersome work. There must be a better solution, and of course there is one: scripting.
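To make that concrete, here is a minimal PowerShell sketch of the private-endpoint approach, assuming the Az module; every name and address range below is invented for illustration:

```powershell
# A sketch, not a production script: deploy an Azure SQL server that is reached
# only through a private endpoint. All names and address ranges are made up.
$rg     = New-AzResourceGroup -Name 'rg-data' -Location 'westeurope'
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'snet-sql' -AddressPrefix '10.0.1.0/24'
$vnet   = New-AzVirtualNetwork -Name 'vnet-app' -ResourceGroupName $rg.ResourceGroupName `
            -Location $rg.Location -AddressPrefix '10.0.0.0/16' -Subnet $subnet

$cred   = Get-Credential -Message 'SQL admin login'
$server = New-AzSqlServer -ResourceGroupName $rg.ResourceGroupName -ServerName 'sql-demo-42' `
            -Location $rg.Location -SqlAdministratorCredentials $cred
New-AzSqlDatabase -ResourceGroupName $rg.ResourceGroupName -ServerName 'sql-demo-42' `
    -DatabaseName 'appdb' | Out-Null

# The private endpoint carries the traffic; no public endpoint is opened.
# (The subnet may additionally need private-endpoint network policies disabled.)
$conn = New-AzPrivateLinkServiceConnection -Name 'plsc-sql' `
          -PrivateLinkServiceId $server.ResourceId -GroupId 'sqlServer'
New-AzPrivateEndpoint -ResourceGroupName $rg.ResourceGroupName -Name 'pe-sql' `
    -Location $rg.Location -Subnet $vnet.Subnets[0] -PrivateLinkServiceConnection $conn
```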

When you write a script, you do the work once, and whenever the same or a similar situation comes up, e.g. deploying an Azure SQL Database according to best practices, you can just pull out your script and off you go. This can be taken to another level by parameterizing your script, so that you just put in the parameters and let the script do the rest. Using this as a mantra, I developed several PowerShell scripts to build different cloud environments. This has the big advantage that the environment is documented, because you have a script, and versioned as well, because all the scripts are saved in a source control system. The overall approach is called Infrastructure as Code (https://en.wikipedia.org/wiki/Infrastructure_as_code).
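A parameterized deployment script can start as small as this sketch; the parameter names and defaults are made up:

```powershell
# Sketch of a parameterized deployment script; names and defaults are hypothetical.
param(
    [Parameter(Mandatory)] [string] $ResourceGroupName,
    [Parameter(Mandatory)] [string] $ServerName,
    [string] $DatabaseName = 'appdb',
    [string] $Location     = 'westeurope'
)

New-AzResourceGroup -Name $ResourceGroupName -Location $Location -Force
$cred = Get-Credential -Message 'SQL admin login'
New-AzSqlServer -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
    -Location $Location -SqlAdministratorCredentials $cred
New-AzSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
    -DatabaseName $DatabaseName
```

Saved as, say, Deploy-SqlDb.ps1, the same script then serves every environment: ./Deploy-SqlDb.ps1 -ResourceGroupName rg-data -ServerName sql-demo-42.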

But imperative scripting in PowerShell also has its shortcomings. The cloud, and the internet in general, is a very uncertain environment. While the script that deploys your environment is running, many things can happen: your internet connection can break down, a deployment step can fail for whatever reason, and so many other things you can think of. So you have to build many conditions into your script: if the resource group already exists, skip it and just build the Azure SQL Database; if the VNet exists, skip that; check whether all the needed subnets exist; and so on. Right? Wrong! Besides the imperative way of telling the Azure Resource Manager what to do, you can also use a declarative approach to build resources in Azure.
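Here is roughly what those defensive checks look like; a sketch, and only a fraction of what a truly robust script would need:

```powershell
# Imperative style: every resource needs its own existence check.
# Names are placeholders.
$rg = Get-AzResourceGroup -Name 'rg-data' -ErrorAction SilentlyContinue
if (-not $rg) {
    $rg = New-AzResourceGroup -Name 'rg-data' -Location 'westeurope'
}

$vnet = Get-AzVirtualNetwork -Name 'vnet-app' -ResourceGroupName 'rg-data' `
          -ErrorAction SilentlyContinue
if (-not $vnet) {
    # ...create the VNet, then check every subnet, the server, the database...
}
# Each new resource type multiplies the number of checks to maintain.
```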

This declarative approach will feel familiar to anyone who has ever written T-SQL code. When you write a query that selects data, you don’t instruct the database system how to retrieve the data from the underlying file structure. You only tell the system what the data you are looking for should look like: select all the rows where the first name is “Frank”. This is exactly the approach that techniques like ARM templates, Bicep templates, or, in the Kubernetes world, YAML manifests take. The scripts are a description of what the target environment should look like. How that target environment is reached is entirely up to the underlying system, such as the Azure Resource Manager. And there is even more: if you change an existing environment, only the parts that changed in the script will be altered in the target environment. Say you have an Azure SQL Database of a certain size and you change the size in your Bicep script. The next time you deploy the script, the Azure SQL Database will be resized without being deleted and redeployed.
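From PowerShell, handing such a description to the Azure Resource Manager is a single call; a sketch, where main.bicep and the resource group name are placeholders:

```powershell
# Declarative style: describe the target state in main.bicep and let ARM do the diffing.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-data' `
    -TemplateFile './main.bicep' `
    -Mode Incremental          # only changed resources are touched

# Rerunning the exact same deployment is effectively a no-op; changing e.g. the
# database SKU in main.bicep and redeploying resizes the database in place.
```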

The ideal process for working with Infrastructure as Code is that the code is checked in to Azure DevOps and an automated process then deploys the changes to your target environment. To change your environment or add resources, you only have to write the needed changes into your Bicep scripts, check them in, and let Azure do the magic.

My invitation to you for this month’s #tsql2sday is:

Think about deploying SQL components through descriptive methods and, of course, blog about it. It doesn’t matter whether you are using Azure with ARM templates or Bicep, or Kubernetes with YAML. Just write about it and build some new cool templates that implement some of your infrastructure or environment best practices. Or write an article about where you have already used descriptive scripts to build environments.

As always, there is a whole lot of material on the internet you can use as a starting point. I have summarized a bit of it here:

T-SQL Tuesday #124 – Using Query Store or Not, Let’s Blog

Invitation and summary from Tracy Boggiano.

Ever since Microsoft introduced Query Store I’ve been working with it, back to the CTPs in 2016. I started presenting on it because it benefited the company I worked for at the time. I’ve heard there are low adoption rates, and from a couple of people about implementation problems or simply not having time to implement it. After three years of presenting on it and writing a book about it, I’m curious about the adoption rate of Query Store, but we won’t be writing about that.

For this T-SQL Tuesday, write about your experience adopting Query Store: maybe something unique you have seen, how you configure your databases, a customization you have done around it, or a story about how it saved the day. Alternatively, if you have not implemented it yet, blog about why, assuming you are on 2016 or above; we know why if you aren’t. If you are unfortunate enough to be below 2016, write about what in Query Store you are looking forward to most once you are able to implement it. Basically, anything related to Query Store is in for this T-SQL Tuesday; hopefully everyone has read up on it and knows what it can do.
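If you want something to experiment with while you blog, turning Query Store on is a single statement. A minimal sketch via the SqlServer module; the instance, database, and option values are invented, not recommendations:

```powershell
# Sketch: enable Query Store on a database (SQL Server 2016+).
Import-Module SqlServer

$query = @"
ALTER DATABASE [AppDb]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 1024);
"@

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'master' -Query $query
```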

T-SQL Tuesday #070 – The Enterprise

Invitation and roundup from Jen McCown.

Here is your invitation for T-SQL Tuesday #70, and the topic is:

Strategies for managing an enterprise

We define “enterprise” in a number of ways, but I tend to default to two definitions: “the things I’m in charge of” and “anything I don’t want to do manually”.  In other words, you don’t need a large shop to have yourself an enterprise. Of course, feel free to modify the definition to what works for you.

So. How do you manage an enterprise? Grand strategies? Tips and tricks? Techno hacks? Do tell.

T-SQL Tuesday #062 – HealthySQL

Invitation and Roundup from Robert Pearl.

So, let’s get this blog party started and kick off our international Healthy SQL campaign. Let’s spread the word to anyone and everyone managing a SQL Server database infrastructure about the necessity of performing regular health checks on each SQL Server, and of repeating them often. The purpose here is to get database professionals to ensure that all their SQL Servers are healthy and can pass a health check. It also means that you can prove it (to, heaven forbid, auditors) and back it up with documentation.

If you want to excel in your career as a data professional or DBA, then you need to be concerned about your company’s SQL fitness. Therefore, I am inviting all of you to blog about your T-SQL resolution and describe what you will do this year to make sure your SQL Servers are healthy and fit. It’s fine to ponder Healthy SQL in the abstract, but we’re looking for technical tips on things a DBA should do to keep SQL Servers performing well.

It could be something as simple as implementing new monitoring software or scripts, updating all your SQL Servers to the latest version or service pack, setting up maintenance and optimization jobs, HA/DR, creating a performance baseline, capturing performance stats (e.g. DMV automation scripts or the MDW), a checklist, etc. The sky is the limit, as long as you contribute something to the SQL community that can be used in the effort to ensure SQL fitness.
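As a tiny example of the capture-your-performance-stats idea, here is a sketch that pulls the top waits from a list of instances; the instance names are placeholders:

```powershell
# Sketch: collect the top wait types from several instances (names are made up).
Import-Module SqlServer

$instances = 'sql01', 'sql02'
$query = @"
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"@

foreach ($instance in $instances) {
    Invoke-Sqlcmd -ServerInstance $instance -Query $query |
        Select-Object @{ n = 'Instance'; e = { $instance } }, wait_type, wait_time_ms
}
```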

T-SQL Tuesday #040 – Files and Filegroups

Invitation and roundup from Jen McCown.

Hold on! I already hear some of you shouting, “Boooooring!!!” You don’t have to write a file and filegroup primer if you don’t want to. Get creative, get tangential! Some of my suggestions:

  • Maybe you’d like to talk about partitioning indexes or tables across filegroups
  • Or performance benefits (still!) of assigning tables to specific drives, via filegroups
  • Or FILESTREAM, specifically (hey, that’s something that requires a specialized filegroup!)
  • Or some horrible misuse of filegroups you’ve seen once  (we always love a SQL horror story)
  • Or, if you have a mind, a file and filegroup backup/restore primer! (Why not? I’m not the only one who loves basics.)
So talk to us, tell us all of your file and filegroup lore! (If you want a warm-up, there’s a small sketch below.)
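For anyone who wants to play along at home, a sketch of the basic moves; the database name, file path, and sizes are invented:

```powershell
# Sketch: add a filegroup, put a file in it, and place a table on it.
Import-Module SqlServer

$query = @"
ALTER DATABASE AppDb ADD FILEGROUP Archive;

ALTER DATABASE AppDb
ADD FILE (NAME = AppDb_Archive1,
          FILENAME = 'D:\Data\AppDb_Archive1.ndf',
          SIZE = 256MB)
TO FILEGROUP Archive;

CREATE TABLE dbo.OrderHistory (OrderId int NOT NULL, Placed date NOT NULL)
ON Archive;
"@

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' -Query $query
```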

T-SQL Tuesday #033 – Trick Shot

Invitation and roundup from Mike Fal.

For a while, I was in an amateur pool league. No, not the one involving water and swimming, but the kind where you try to sink balls into pockets. It was a lot of fun and a challenge for your motor skills as well as your strategy. I still shoot from time to time, and I still hang out with my old pool buddies.

One thing the guys would get into was trick shots: two- and three-rail bank shots, massé shots, or jumping the cue ball to hit the target. Most of these shots weren’t tournament legal, but they were fun to try and nice to impress the ladies. More than that, they were a tool to teach you the physics of your pool game. You could see how throw and english affect your shot, or how balls behave after impact.

Just like so many other things in my life, the trick shot lessons translate to SQL Server. How many times have we built something neat or puzzled out a particular bit of logic that, while it may not have been particularly useful, taught us something about how SQL Server behaves? This month’s T-SQL Tuesday is all about that, and the assignment is two-fold:

  1. Show us a cool trick or process you developed, maybe a DMV you used or some reporting logic you created. It doesn’t have to be useful, just something you thought was pretty neat.
  2. Tell us what you learned from this trick. Is it something about an oddity in SSRS? Maybe the query processor? Whatever you did, tell us how it gave you insight into how SQL Server works. (For a trivial example of the genre, see the sketch after this list.)
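Here is a sketch of the sort of trick meant here: scripting out a table’s DDL with SMO from PowerShell, which, among other things, teaches you that the scripting features in SSMS are built on the same object model. Instance, database, and table names are made up:

```powershell
# Sketch: not very useful, but neat. Script a table's CREATE statement via SMO.
Import-Module SqlServer   # loads the SMO assemblies

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$table  = $server.Databases['AppDb'].Tables['OrderHistory', 'dbo']
$table.Script()           # emits the CREATE TABLE statement as strings
```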

T-SQL Tuesday #015 – Automation in SQL Server

Invitation and Summary from Pat Wright.

Having taken part in several T-SQL Tuesdays, I decided I would finally put my name in to host one. I figured it would be a good way to lose my sanity, er, learn some great ideas from this wonderful SQL community. And since many of you out there set a goal this year to blog more and to learn PowerShell, this topic should help with both of those goals. So the topic I have chosen for this month is Automation! It can be automation with T-SQL or with PowerShell or a mix of both. Give us your best tips, tricks, and ideas for making our lives easier through automation. Now here are all the details you’ll need for a successful T-SQL Tuesday post!
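To prime the pump, here is a sketch of the T-SQL-plus-PowerShell mix: flagging databases whose last full backup is too old, across a list of instances. The instance names and the 24-hour threshold are placeholders:

```powershell
# Sketch: report databases with no full backup in the last day (names are made up).
Import-Module SqlServer

$query = @"
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
"@

foreach ($instance in 'sql01', 'sql02') {
    Invoke-Sqlcmd -ServerInstance $instance -Query $query | Format-Table -AutoSize
}
```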