T-SQL Tuesday #145: The Pandemic, Costa Rica, and Events

Invitation from Xavier Morera.

Welcome back to this blog party tradition that has been going strong for years!

I am really happy to be hosting this month and since we are in the middle-ish (hopefully closer to the end) of a pandemic, I would like to ask you the following question:

How much do you love meeting in person, where would you like for your next event to take place, and why Costa Rica?

I am no stranger to in-person events. In fact, I’ve spent a good deal of my life traveling all over the world, teaching technologists from all kinds of companies – big and small – about a wide range of subjects.

Some of the places I have traveled are fantastic for that real-world interaction that we all need.

Yes, remote work is nice and many companies and employees have indeed found out that you can actually work from home in an efficient manner.

However, IMHO, there is no replacement for that feeling of walking into the presentation hall, having the chance to talk to experts, and meeting new and interesting people who most likely have the same problems as you, or who are trying to change the world one application or solution at a time.

Now, help me by answering these questions:

  1. Which is your favorite conference and why?
  2. Which is the best venue that you have visited for a tech conference?
  3. Who is the best presenter that you have ever listened to?
  4. Which location would you like for your next event to take place and why Costa Rica?

Let me know what you think!

T-SQL Tuesday #144 – Data Governance reimagination

Invitation from Dr. Victoria Holt.

Data governance is a topic that has raised its head again in the last year, with the introduction of a generally available service called Azure Purview. Data governance is not a new topic in the realms of data management. I think over the last few years data governance has focused on meeting data protection law, government legislation, and formalised controls and standards. Then, with data sovereignty issues becoming everyday considerations in the global marketplace, storage locations in the cloud, and the likes of the General Data Protection Regulation, data governance has been very focused on meeting legislative requirements.

There has been a substantial cost involved in setting up data governance within an organisation to avoid those heavy fines if a data breach were to occur. I think because of this many organisations compartmentalise personal data systems.

A change in perspective, a reimagination, of data governance is occurring. Data governance is really about ‘data erudition’: showing an interest in learning about the data we have, improving its quality, and creating a more productive and trusted data asset. Starting small and making incremental data changes in a way that matches the business need is a way to gain targeted business value. Azure Purview provides a great opportunity for us to start at the beginning and create a data catalog, data inventory, data dictionary and much more in an automated way. Trusting the data quality and knowing your risks and what data you have sets your business up for success.

My invitation to you for this month’s #tsql2sday is…

I want to invite you to share your experiences on data governance:

  • The current cost of data governance versus its benefits
  • The amazing things data governance has enabled you to achieve or will enable you to achieve in the future
  • The potential uses for Azure Purview within your estates and the automated deployment options for that

T-SQL Tuesday #143 – Short code examples

Invitation and wrap-up from John McCormack.

T-SQL Tuesday this month is going back to basics and it’s all about code. I’d like to know: what are your go-to handy short scripts?

What are those little short bits of code that you can’t live without? I’m talking about little snippets that are only a few lines, that you may even have memorised. It could be T-SQL, PowerShell, Python or anything else you use day to day.

For example, I manage a lot of SQL Agent jobs. Quite often, I need to find out which job has a certain T-SQL string in the command, so I’ll run:

SELECT *
FROM msdb..sysjobs sj
JOIN msdb..sysjobsteps sjs
    ON sj.job_id = sjs.job_id
WHERE sjs.command LIKE 'backup log%';

Of course, there are many other ways to find this out, including dbatools commands, but sometimes I just revert to memory for convenience.

Another one I like is getting the estimated completion of a backup or restore. There are better scripts than this, but sometimes nothing beats getting a quick estimate back from a couple of lines of memorised T-SQL.

SELECT percent_complete AS pc, *
FROM sys.dm_exec_requests
ORDER BY pc DESC;
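
If you want something slightly more targeted, here is a hedged variation (my own sketch, not John’s script) that filters the same DMV down to backup and restore commands and adds a rough estimated finish time:

SELECT command, percent_complete,
       DATEADD(SECOND, CAST(estimated_completion_time / 1000 AS int), GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%' OR command LIKE 'RESTORE%'
ORDER BY percent_complete DESC;

It is a few characters longer, so it sits somewhere between a memorised snippet and a saved script.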

My invitation to you for this month’s #tsql2sday is…

I would like you to share with the community what your go-to script snippets are and why you find them useful. By sharing these, you will undoubtedly be helping someone who hasn’t thought of doing it that way, and hopefully you’ll pick up some handy hints as well.

  • Any language is fine, not just T-SQL
  • Please share as many as you wish
  • Perhaps you never do this and always work off saved scripts or convert your snippets to stored procedures? Tell us why this works for you.

T-SQL Tuesday #142: Using descriptive techniques to build database environments

Invitation from Frank Geisler.

In the good old days, you usually had to deal with one, two, or maybe three SQL Servers. As you all know, those times are over. With the rise of the cloud, every one of us has to deal with more and more systems, not only Infrastructure as a Service but also Platform as a Service (PaaS) offerings. The systems themselves are getting more complex through all the new services and technologies that are involved and somehow interconnected. New movements like Azure Arc-enabled Data Services bring a whole new aspect to the table, where you can easily choose whether to run your data workload on your on-premises Kubernetes cluster or in the cloud.

All these systems can easily be built with the Azure Portal, but this is not sustainable. Each time you use the portal, you must remember how to build a certain system and – more importantly – how to apply best practices. You can, for example, build an Azure SQL Database with an open endpoint to the internet and secure it with firewall settings, but this should be done with great caution because you are exposing your database to the internet. A better approach would be to build an Azure SQL Database that does not have a public endpoint but a private endpoint into an Azure VNet which hosts the systems that must access the database, or which is connected to a local network via a VPN gateway. As you can imagine, there are a lot of moving parts to get such an environment up and running, and you must remember (or document) each of them. This is very cumbersome work. There must be a better solution, and for sure there is one: scripting.

When you write a script, you do the work once, and whenever the same or a similar situation comes up – for example, deploying an Azure SQL Database according to best practices – you can just pull out your script and there you go. This can be taken to another level by parameterising your script, so that you just put in the parameters and let the script do the rest. Using this as a mantra, I developed several scripts to build different cloud environments in PowerShell. This has the big advantage that the environment is documented, because you have a script, and versioned as well, because all of our scripts are saved in a source control system. The overall approach is called Infrastructure as Code (https://en.wikipedia.org/wiki/Infrastructure_as_code).

But imperative scripting in PowerShell also has its shortcomings. The cloud, and the internet in general, is a very uncertain environment. While running the script that deploys your environment, many things can happen: your internet connection can break down, the deployment can fail for whatever reason, and so on. So you have to build many conditions into your script: if the resource group exists, skip that and just build the Azure SQL Database; if the VNet exists, skip that; check whether all the needed subnets exist; and so on. Right? Wrong! Besides the imperative way of telling the Azure Resource Manager what to do, you can also use a declarative approach to build resources in Azure.

This declarative approach will be familiar to everyone who has ever written T-SQL code. If you write, for example, a query that selects data, you don’t instruct the database system how to retrieve the data from the underlying file structure. You only tell the system what the data you are looking for should look like: select all the rows of data where the first name is “Frank”. This is the exact same approach that techniques like ARM templates, Bicep templates or, in the Kubernetes world, YAML manifests take. The scripts are a description of what the target environment should look like. How that target environment is reached is entirely up to the underlying system, such as the Azure Resource Manager. And there is even more: if you change an existing environment, only the parts that changed in the script will be altered in the target environment. Say you have an Azure SQL Database of a certain size and you change the size in your Bicep script. The next time you deploy the script, the Azure SQL Database will be resized without being deleted and redeployed.
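
To make the analogy concrete, here is a minimal T-SQL sketch of that declarative query (the table and column names are hypothetical, purely for illustration):

-- Describe the rows you want; the engine decides how to fetch them.
SELECT *
FROM dbo.Persons
WHERE FirstName = 'Frank';

A Bicep or ARM template works the same way conceptually: you describe the resources you want, and the Azure Resource Manager works out what to create, change, or leave alone.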

The ideal process for working with Infrastructure as Code would be that the code is checked in to Azure DevOps and an automated process then deploys the changes to your target environment. To change your environment or to add resources, you only have to write the needed changes into your Bicep scripts, check them in, and let Azure do the magic.

My invitation to you for this month’s #tsql2sday is:

This is my invitation to you this T-SQL Tuesday to think about deploying SQL components through descriptive methods and, of course, to blog about it. It does not matter whether you are using Azure and ARM templates or Bicep, or Kubernetes and YAML. Just write about it and build some cool new templates that implement some of your infrastructure and environment best practices. Or you can write an article about where you have already used descriptive scripts to build environments.

As always there is a whole lot of stuff on the internet you can use as a starting point. I summarize a little bit here:

T-SQL Tuesday #141: Work/Life Balance?

Invitation and wrap up from TJay Belt.

Having gone through this last year of strange upheaval around the world, I’ve been thinking about this topic quite a bit lately. My company has a mentor program where one can sign up and discuss this topic. I signed up. I’ve had several ‘sessions’ with my mentor. We’ve had some productive conversations about what this all means to me, my career, my skillset, and so on.

As I transitioned to working at home, some 500 days ago, it was a shift in perspective, timing, and many other things. Trying to keep those scales balanced has been a focus, and a struggle, from time to time. Especially recently, but throughout my career, it’s a topic I often lapse back into: trying to find that balance.

My invitation to you for this month’s #tsql2sday is…

I want to invite you to share your experiences dealing with your own personal journey with Work/Life Balance, either thru experience, or hopeful future goals…

  • What are the cool things you have learned?
  • What things did you try that simply didn’t work out?
  • What rules have you implemented that made it easier to balance?
  • What rules made it harder to find that balance?
  • What tips or tricks can you share to help others?

T-SQL Tuesday #140: What have you been up to with containers?

Invitation and wrap-up from Anthony Nocentino.

In recent years containers have come into the data platform world, exposing new technologies to data professionals. Microsoft put SQL Server on Linux, and shortly after that, SQL Server made its way into containers. SQL Server in containers has become the foundation for things like Big Data Clusters and Azure Arc-enabled Data Services.

My invitation to you for this month’s #tsql2sday is…

I want to invite you to share your experiences using containers and not just SQL Server in containers…

  • What are the cool things you’ve done with containers in your environment, test lab, or even presentation demos?
  • Are you using containers in production? If so, what are the tips or tricks you can share to help others?

If you haven’t tried containers yet…here’s a video showing you how to do the following…

  • Deploy a SQL Server in just a few minutes!
  • Connect to your container-based SQL Server.
  • Upgrade a container-based SQL Server to a new version.

So, if you haven’t used containers before, go ahead and try out the demos from this video, which you can get here, and write about your experience!
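
If you do try the demos, one small sanity check (a generic sketch of my own, not part of Anthony’s video) is to run the same query against the container before and after the upgrade step and compare the build numbers:

-- Quick check of the SQL Server build running in the container
SELECT @@VERSION AS version_banner,
       SERVERPROPERTY('ProductVersion') AS product_version;

If the product version moves to the new build after you swap the container image, the upgrade worked.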

T-SQL Tuesday #139: The data world is hybrid

Invitation from Ben Weissman.

Our data estates are not just on premises anymore. And they haven’t all moved to the cloud either.

New offerings for a hybrid world like Azure Arc-enabled Data Services pop up but are far from being widely used in production environments.

It has become pretty evident to me that we’ll be living in a hybrid world for a long time – potentially indefinitely. But what does that mean for our data strategy?

My Invitation

On this month’s #tsql2sday, I would therefore like to invite you all to share your hybrid and edge experiences!

Where are you on your journey?

Which challenges did you face?

How did you address them?

T-SQL Tuesday #138: Managing Technology Changes

Invitation and wrap-up from Andy Leonard.

One point I make (repeatedly) in my latest book – titled Building Custom Tasks for SQL Server Integration Services – is “software changes.” In fact, software changed on me between the completion of editing and the release of the book! I wrote about the changes in a post titled Building Custom Tasks for SSIS Second Edition Errata, Chapters 1-9. There’s a live stream video at the bottom of that post, as well.

Changing software inspired this month’s T-SQL Tuesday topic:

“How Do You Respond When Technology Changes Under You?”

T-SQL Tuesday #137: Using Notebooks Every Day

Invitation and wrap-up from Steve Jones.

Jupyter Notebooks

I first heard about Jupyter Notebooks years ago. At the time I was just getting started in Python and I thought these were a great way to share code online with others. However, the setup and administration was a pain, and I quickly gave up.

When I saw Microsoft add notebooks to Azure, I got slightly excited again, but once again, it was a bit of a pain to work with these. Too much administrative overhead, in my opinion.

That changed a bit with Azure Data Studio, which has seen notebooks added and become easier and more stable. If you haven’t tried notebooks, get ADS and give it a try today. I even wrote a short piece on using these.

Your Invitation

For this month’s T-SQL Tuesday, I want you to write about how you have used, or would like to use, a Jupyter notebook. This seemed to be exciting for many people at first, but I haven’t seen a lot of uptake from users in general. So I’m curious if you are using them.

A few resources for you:

There is even a way to run these from PowerShell.

So let me know how you are using notebooks, or your plans for the future.

T-SQL Tuesday #136 – Blog About Your Favorite Data Type (Or Least Favorite)

Invitation and wrap-up from Brent Ozar.

Your mission: write a blog post about your favorite data type, and schedule it for next Tuesday, March 9.

Mad about money? Feverish about float? Tell us what you use your favorite data type for, and what people need to know before they get started using it. Data types have so many interesting edge cases, like when Aaron Bertrand taught us to use lower case for our data types. (I still don’t do that, but for the record, I do feel guilty.)
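
If you need a nudge, here is a small, hedged illustration of the kind of edge case worth blogging about, using money and float (a sketch of my own, not Brent’s example):

-- money keeps only four decimal places, so division quietly loses precision
SELECT CAST(1.00 AS money) / 3 AS money_third,              -- 0.3333
       CAST(1.00 AS decimal(19, 10)) / 3 AS decimal_third;  -- 0.3333333333...

-- float is binary floating point, so 0.1 + 0.2 does not compare equal to 0.3
SELECT CASE WHEN CAST(0.1 AS float) + CAST(0.2 AS float) = CAST(0.3 AS float)
            THEN 'equal' ELSE 'not equal' END AS float_comparison;

Whatever your favorite (or least favorite) type is, that is the sort of gotcha readers will thank you for writing down.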