T-SQL Tuesday #133: What (Else) Have You Learned from Presenting?

Invitation and wrap-up from Lisa Griffin Bohm.

This month, I’d like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation but that you learned while writing or giving it. This can include a work presentation, for those of you who haven’t spoken at an event!

Why do I ask?  Well, if any of you have heard Allen White (b|t) speak on any topic, he will start his presentation speaking about how much PASS has contributed to his life and career. He will then talk about how every person has a story to share, and there will be a piece of that story that no one has ever heard before. We ALL have something to learn from each other. He will also tell you that you will learn more than you ever thought just by writing and presenting on ANY topic, as well as answering questions!  So, I’d like folks who have done this to share and encourage – especially the new folks who are on the fence about presenting. Feel free to go more in-depth with some of the technical details of your learning than I did!

I will share a couple of stories about my own experiences. My first presentation at our local PASS user group was a lightning talk about a bizarre performance hit we took from a foreign key with an ON DELETE CASCADE option. It was a rare scenario, so I figured most folks hadn’t run into this.

As I was researching foreign keys in general, I realized… hey!  Did you know there are other options for updating and deleting data tied to foreign key constraints? I don’t know that I’ve paid attention to that for the last… at least 10 years. I’ve never seen any of these options used in code I’ve maintained or identified as problematic.  You can specify behaviors for both the delete and update actions – for example, if I update the value that the foreign key constraint points to, I can set all the referencing foreign key values to NULL. Or update them to the new value. Or… not do anything. There are ramifications for each of the choices, but I could definitely see how they might be useful in certain scenarios.
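As a sketch of those referential actions (the table and column names here are made up for illustration):

```sql
-- Hypothetical parent/child tables showing the referential actions
-- discussed above; names are invented for the example.
CREATE TABLE dbo.Customers (
    CustomerID   int NOT NULL PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Orders (
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NULL,  -- must be nullable for SET NULL to work
    CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
        ON DELETE SET NULL   -- orphaned orders keep the row, lose the link
        ON UPDATE CASCADE    -- a changed CustomerID flows down to Orders
);
-- The other options are SET DEFAULT (use the column's default value)
-- and NO ACTION (the default: block the delete/update with an error).
```

With this constraint in place, deleting a Customers row quietly sets the matching Orders.CustomerID values to NULL instead of failing – which is exactly the kind of behavior that can surprise you in an execution plan.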

Also, people were interested enough in the topic to ask about indexing on some of the different fields, and asked me to expand this into a full presentation. I did so and presented at our next local SQL Saturday, including testing with the indexes that were recommended to me.

Learning summary:

  1. Tables may show up in the execution plan even if they weren’t in the query AT ALL (look for constraints!)
  2. There are other options for coding foreign keys to handle updates/deletes to the primary key data that could be useful elsewhere
  3. Reinforcement: indexes with low cardinality don’t always improve performance!

I also wrote a book on refactoring legacy T-SQL. I was talking about functions, and how SQL Server’s STATISTICS IO won’t show IO that a function performs. I remember thinking, “I CANNOT write a book in 2019 that talks about running a trace!”  So, I started using simple Extended Events sessions to show the increased IO from functions that the native SQL Server STATISTICS IO sneakily hides. I need to start making more use of Extended Events, and this was a great kick in the pants to get going on them – as well as a chance to code a very simple, more modern solution to share with readers for more accurately finding the IO stats of the SQL you’re running/tuning!
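A minimal sketch of that kind of Extended Events session might look like this (the session name and session_id filter value are placeholders, not from the book):

```sql
-- Minimal Extended Events session capturing logical reads per statement,
-- including reads done inside scalar functions, which
-- SET STATISTICS IO ON does not report.
CREATE EVENT SESSION [TrackStatementIO] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text)
    WHERE sqlserver.session_id = 55  -- limit to your test connection's SPID
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackStatementIO] ON SERVER STATE = START;
-- Now run the query that calls the function, then inspect logical_reads
-- in the ring_buffer target (e.g. via Watch Live Data in SSMS).
```

The logical_reads column on sql_statement_completed is where the function’s hidden IO shows up, making the difference against STATISTICS IO easy to demonstrate.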

I could continue about buying a “lab” laptop, re-learning how to use Oracle Virtualbox, dealing with networking (not one of my strengths), using SQL Server core (sooo fast to install!), and all of those fun things. But, there will be future blog posts on those topics, and it’s time for others to share THEIR stories and experiences!  Thanks for reading!

T-SQL Tuesday #132: How Are You Coping with Pandemic?

Invitation and roundup from Taiob Ali.

This month we will share how each of us is coping with a global pandemic.

2020 is a unique year for all of us. We are living through a worldwide crisis that most of us have never seen or experienced. It has touched almost all aspects of our life. Depending on your location, social interaction has been reduced to almost none for many of us. Our home has become an office, school, day-care center, and playground. Many of us have lost family members to this pandemic. PASS Summit 2020, SQL Saturday events, and monthly user group meetings are happening virtually. Most of us who were working in an office building are working remotely. Travel has become a thing of the past. Every individual’s challenges are different. I want all of you to share your experience (with whatever you are comfortable with), how you are dealing with these new challenges, and tips you have for others going through the same.

The enormous scale of the crisis and the impact it is having are naturally causing a lot of fear, uncertainty and anxiety across the globe. Add social isolation, disrupted work and family routines, cabin fever and economic instability, and it is understandable that our mental health is suffering. In a recent survey by the Kaiser Family Foundation, 45% of adults felt that worry and stress related to coronavirus had a negative impact on their mental health.

https://www.kff.org/health-reform/report/kff-health-tracking-poll-early-april-2020/

These are a few pointers (do not be limited by these) to help you get started:

  • What new challenges were you facing at the beginning of this year?
  • What are the things you had to adjust to deal with these new challenges?
  • What tips do you have for others that helped you cope with the changes during this pandemic?
  • What are the areas where you could not find a solution and wish someone could help you?
  • What challenges have you noticed others facing, and how did they deal with them?
  • Have new opportunities opened up for you as things shifted to remote and digital?
  • Stories of kindness during this crisis.
  • Anything else you want to share?

We will focus on three areas to share our thoughts. For many of us, separating these three might not be possible, as each one depends on the other. You can write about one or all three areas. You are also welcome to share anything outside of these three areas.

  • Mental health
  • Physical health
  • Professional growth

T-SQL Tuesday #131: Data Analogies, or: Explain Databases Like I’m Five!

The current invitation is from Rob Volk.

Welcome to T-SQL Tuesday! This month we’ll explore analogies – how you would explain database concepts to someone who’s not technologically savvy.

If you’re a data professional, you’ve probably been asked “So what exactly do you DO?”, and probably not found a great way to answer. Database theory and practice can be hard, and every professional struggled with their first computerized data concepts.

While attending and presenting sessions on SQL Server and other data topics, I’ve found analogies often explain a concept better than any computer science or mathematical description does. We’ve probably heard that “a clustered index is like a phone book”, and those old enough to remember phone books will agree, but it doesn’t really explain why that structure is useful. (hint: it requires sorting)

I recently found a YouTube video that describes how binary search works, and why sorting makes it work. As this is a foundational concept for most database indexing schemes, I think it does a great job explaining why it’s more efficient. What I like most about it is that it doesn’t demonstrate this on a computer, but uses cards with different values (even if they’re not physical cards).

Brent Ozar recently posted a video on statistics that DOES use actual playing cards. I’ve also seen Aaron Cutshall use cards in presentations on Set Theory.

For this month’s T-SQL Tuesday, I’d like you to write about your favorite analogies that help explain database concepts to people who aren’t database experts. I like the spirit of Reddit’s Explain Like I’m Five (ELI5) subreddit, where the explanations are meant for someone who has no previous assumptions about the topic.

ELI5 doesn’t preclude deeper examples. For example, a long time ago I posted one version of my “cursors are tweezers” analogy to explain why you shouldn’t use them. You can not only explain why something works, but also why something else DOESN’T work, and I welcome such examples if you have the space and time.

T-SQL Tuesday #130 – Automate Your Stress Away

Invitation and recap from Elizabeth Noble.

Life as a data professional is stressful. This year is even more stressful. We have so many responsibilities and so many demands coming at us every day. I’ve found over the years that I love the stress, but I also want to make my life and the lives of those around me easier, calmer, more peaceful. I can’t change everything about my job or what is expected of me. After a particularly stressful summer many years ago, I wanted to figure out what I could change in my day-to-day tasks. How could I make my life easier?

At the time, the largest hurdle I had was around deployments. Our deployments consisted of a collection of SQL scripts collected from individual user stories. Each script was executed separately, as we didn’t have a good process for what to do if any particular script failed. In addition, we created rollbacks for each script that was deployed. The process worked for us for quite some time, until one day we deployed our first new application in years.

Eventually, we moved a handful of our databases to source control. That part was easy, as you can use Visual Studio and SQL Server Data Tools (SSDT) to import a schema for an existing database. The next steps took quite a bit of communication between the teams. A few missteps later, we had 2 of our 10 or so databases deploying through a Continuous Integration/Continuous Delivery (CI/CD) pipeline. We still deploy once every two weeks, but our deployments are generally quicker and less tedious.

The time savings are nice. We have about 24-26 deployments a year, and we easily save at least an hour on average per deployment. That’s a full day a year! But the best part for me is the next day. I still check my email as soon as I wake up to ensure that there aren’t any issues reported, but even if there are issues, they are usually quickly resolved and we go on about our day.

My invitation to you: I want to know what you have automated to make your life easier. This can be anything from creating a SQL Server Agent job to automate running a daily report, to using dbatools to manage your servers. I’m curious what challenges you’ve found at your job and what you’ve done to make things better. If you haven’t had a chance to automate some part of your job, what would you like to automate, and what are your hurdles? If you’re interested in some help or advice, let us know. I love #SqlFamily, and I’d love to see what we can do to help out.

T-SQL Tuesday #129 – Let’s Build a Time Capsule

Invitation and wrap-up from Tamara Clark.

So here we are finishing up summer. Last fall when Steve asked me to host I had this great idea…now with all of the current affairs my idea seems less than ideal. So here comes my new good idea, thanks Arlene for the help.

I want to build a time capsule, maybe we all escape to a new planet, maybe the earth just withers and dies, or maybe this is just an exercise in being positive. Tell me what few items or bits of information you’d like me to put into the #SQLCommunity time capsule. What do you want others to know about the #SQLCommunity and #SQLFamily? What do you want the finders of this time capsule to know about our current goings-on? What would you like the finder to know about you, how would you like to be remembered? Even if it’s a piece of rockin’ code that you think should live on forever, include it.

And don’t worry, you can put anything you want in our time capsule: personal, technical, not #SQLFamily related. It’s my month and I say you can do whatever you want!

T-SQL Tuesday #128 – Learn From Others

Invitation and recap from Kerry Tyler.

Pilots do something that a lot of non-pilots will find fairly weird if not outright horrifying: We read accident (“crash”) reports. Some of us spend a lot of time reading accident reports, actually. Officially “Accident Reports”, these are put out by the US National Transportation Safety Board (NTSB) after investigation into a crash or an “incident.” In addition to aviation-related reports, there are highway and railroad reports, and even hazardous materials incidents.

Reports come in two flavors: a “preliminary” report, and ultimately, a “final” report after the investigation has completed. The final report includes such items as conclusions and the probable cause of the accident or incident. To make life easier, it also includes a Recommendations section, which, well, includes recommendations for how to keep this type of accident from happening in the future. These tend to be regulatory in nature, as they are geared towards the FAA.

The search form for aviation reports is here–https://www.ntsb.gov/_layouts/ntsb.aviation/index.aspx–if you’re, uh, thinking you want to get into this sort of thing.

Why do pilots do this? The rationale is pretty simple: To learn from the mistakes of others. Or, to learn how a bad day was kept from becoming a worse day after something broke.

What Does This Have to Do With SQL Server?

Great question. Besides the fact that I think piloting airplanes and DBA-ing are the same job, just with different scenery,  I wish we had this kind of transparency in the IT world when things went wrong. When a corporation has a big security incident, we’re likely not to hear a lot of details publicly about what went wrong and what was done to mitigate similar attacks in the future. This kind of information could help everyone. This is one of the things that cloud providers do quite a bit better: When something breaks, we get good information on what happened, why, and what’s going to be done about it. Of course, this is done because public cloud providers basically have to–if things went down a lot and we never heard why, that provider probably wouldn’t have a lot of customers for very long.

This brings me to T-SQL Tuesday.

Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it. Of course, the intent here would be for this to be SQL Server-related, but it doesn’t have to be. We can all learn from something going wrong in infrastructure-land, or how there was a loophole in some business process that turned around and bit somebody’s arm. It doesn’t even have to be all that recent–maybe you’ve got a really good story about modem banks catching on fire and that’s how you found out the fire suppression system hadn’t been inspected in years. Just spitballin’ here. If you’ve got an incident whose resolution can help someone else avoid the same problem in the future or improve a policy as a preventative measure, let us hear about it.

TSQL Tuesday #127 Invite – Non SQL Tips and tricks

Invitation from Ken Fisher.

How about for this month’s TSQL Tuesday let’s do another month of tips and tricks. But just to be a bit different, nothing to do with SQL Server (in fact, let’s say nothing related to a DBMS). No SSMS tricks, no T-SQL tips, something completely unrelated. For example, did you know that, in Windows, if you grab a window and shake it, all of the other windows will be minimized? And if you shake it again, they all pop back up.

So give me a non-SQL-related tip or trick. OS (Windows, Mac), cloud (Azure, AWS), etc. Something you think will be handy for the rest of us.

What is T-SQL Tuesday

T-SQL Tuesday is a blog party started by Adam Machanic (b/t) over a decade ago and now maintained by Steve Jones (blog|twitter) on tsqltuesday.com. The first Tuesday of each month a blogger hosts the party and suggests a topic. Then anyone who is interested blogs on that topic. It can be a lot of fun and quite a challenge to blog on a topic you didn’t pick. Not to mention the time limit. I didn’t mention the time limit? Oh, well, you have one week to write your blog and then it has to be posted on the second Tuesday of the month.

T-SQL Tuesday #126 – Folding@Home

Invitation and recap from Glenn Berry.

Introduction

This month’s T-SQL Tuesday is devoted to the Folding@Home (FAH) distributed computing project. This project was originally launched by Pande Lab at Stanford University on October 1, 2000. It is currently based at Washington University in St. Louis, led by Dr. Greg Bowman.

Folding@Home has been around for nearly twenty years, and they have a long record of generating valuable biomedical research. During the current COVID-19 pandemic, there has been a surge of donors and attention focused on the work that the FAH team has been doing. On March 8, 2020 I started a SQLFamily team for Folding@Home. Since then, 130 other people (so far) have joined in the effort. That leads to our current topic, T-SQL Tuesday #126 – Folding@Home.

T-SQL Tuesday #126 – Folding@Home

I had previously heard about Folding@Home many years ago, since a number of hardware related websites used to mention it (because they had competitive teams) fairly frequently. On March 7, I downloaded the FAH client software on a few of my machines and started folding. I started the team the next day. To my happy surprise, many other people in the SQL Server community started joining the team, donating their computer resources.

Since then, I have learned quite a bit about FAH. I have also done quite a bit of hardware modification, optimization and upgrades on my small fleet of lab machines. The point of the hardware work for me was to learn how to get more FAH production in the most efficient way possible. I have also tried to spread the word about the project and our team, and to share some of my experience.

Your mission, if you choose to accept it, is to write about what you have been doing as a response to COVID-19. Maybe you have joined the SQLFamily FAH team. Perhaps you have been printing “ear savers” using 3D-printers like Tim Radney. Perhaps you have been donating food or doing some other sort of volunteer work. Whatever you have been doing, now is your chance to talk about it and publicize it!

Personally, I would love to hear about your experience and any lessons learned with FAH. What kind of hardware are you using, and have you made any changes? Are you running FAH in the cloud or in some other interesting way? Have you done anything interesting from a technology perspective related to FAH?

T-SQL Tuesday #125 – Unit testing databases – we need to do this!!

Invitation from Hamish Watson.

T-SQL Tuesday is an awesome way of encouraging blog posts from the community and helping to share that knowledge out.

My topic is about unit testing databases – something that I don’t see enough of when I am working with clients. The good news is that over the years I’ve noticed that more people are speaking and writing about unit testing databases and folding that testing into DevOps processes like Continuous Integration & Continuous Delivery processes (CI/CD).

I hope that this topic drives some conversation both for it (because it protects your code and data…) and against it (it takes too long to write these pesky unit tests…!!).

We’re now delivering Bugs to Production faster than ever!!

This clickbait type heading actually describes what will happen if you embrace DevOps processes without doing any form of testing. Because DevOps is all about accelerating the delivery of software – we want to do more deployments and do them quicker…

..which is why testing is core to DevOps practises.  Specifically testing right throughout the deployment pipeline – that is starting at your laptop and finishing in Production (yes I advocate for testing in production but that’s a whole other blog post..).

What is Unit Testing?

Unit testing is a software testing method by which individual units of source code are tested to determine whether they are fit for use. A unit is the smallest possible testable software component.  A unit is small, so it is easier to design, execute, record, and analyse test results for than larger chunks of code are. Defects revealed by a unit test are easy to locate and relatively easy to repair.

This is opposed to finding the defect in Production – which is harder to triage and is now affecting users – whereas if you find that bug on your laptop via a unit test – it is way easier to remediate and only affects – you.

The ideal qualities of unit tests:

  • Decisive – the unit test has all the info it needs to determine success/failure
  • Valid – it produces a result that matches the intention of the code written
  • Complete – it contains all the information it needs to run correctly within the test harness
  • Repeatable – it always gives the same results if the test harness and code are the same
  • Isolated – it is not affected by other tests run before it, and does not affect the tests run after it
  • Automated – it requires only a start signal in order to run to completion
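As a concrete sketch, here is roughly what a database unit test looks like with the open-source tSQLt framework (the procedure under test, dbo.AddNumbers, is made up for the example):

```sql
-- Assumes the tSQLt framework is installed in the database.
-- dbo.AddNumbers is a hypothetical procedure under test.
EXEC tSQLt.NewTestClass 'testMath';
GO
CREATE PROCEDURE testMath.[test AddNumbers returns the sum of its inputs]
AS
BEGIN
    DECLARE @result int;

    -- Act: call the unit under test.
    EXEC dbo.AddNumbers @a = 2, @b = 3, @sum = @result OUTPUT;

    -- Assert: decisive pass/fail with all info in the test itself.
    EXEC tSQLt.AssertEquals @Expected = 5, @Actual = @result;
END;
GO
EXEC tSQLt.Run 'testMath';  -- each test runs in its own transaction: isolated and repeatable
```

Because tSQLt wraps each test in a transaction that is rolled back afterwards, tests like this stay isolated from one another and leave no data behind – which is what makes them safe to run automatically in a CI/CD pipeline.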

Benefits of Unit Testing

Below are the benefits of unit tests – this relates to application and databases alike.

Code Quality goes up:

Unit testing improves the quality of the code. It identifies every defect that may have come up before code is sent further for integration testing. Writing tests before actual coding makes you think harder about the problem. It exposes the edge cases and makes you write better code.

Find Issues early:

Issues are found at an early stage. Since unit testing is carried out by testing individual code before integration, issues can be found very early and can be resolved then and there without impacting the other pieces of the code.

Simplifies Integration

Unit testing allows us to refactor code or upgrade things at a later date and make sure everything still works correctly. Unit tests detect changes that may break things and help with maintaining and changing the code base.

The best part about unit testing is that it verifies the accuracy of each unit of code. Afterward, when we integrate the code units together and run some form of integration testing during the build process, we can be confident that the individual units of code are sound.

So what about databases?

So now that I have introduced unit testing – is it valuable for implementing with databases? Rather than write my own opinion – I’m going to hand it over to the community to answer this…

🙂

(BTW I think it is valuable – I speak on it regularly and I implement it with clients to safeguard their production databases…)

Lastly – we’re currently in a lockdown in New Zealand – because of the COVID-19 pandemic sweeping the world.

I want to say to all who are reading this:

Be Safe

Be Strong

and please:

Be Kind

If you’re struggling with things – please reach out to your support network (we care about you). I wrote some things that have been helping keep me going:

Take care – we’re in this together and you’re not alone
#sqlfamily

Yip.

T-SQL Tuesday #124 – Using Query Store or Not Let’s Blog

Invitation and summary from Tracy Boggiano.

Ever since Microsoft introduced Query Store I’ve been working with it, back to the CTPs in 2016.  I started presenting on it because it benefited my company at the time.  I’ve heard there are low adoption rates, and from a couple of people, about implementation problems or simply not having time to implement it.  After 3 years of presenting on it and writing a book about it, I’m curious about the adoption rate of Query Store, but we won’t be writing about that.

For this T-SQL Tuesday, write about your experience adopting Query Store, maybe something unique you have seen, or a how your configure you databases, or any customization you done around it, or a story about how it saved the day.  Alternately, if you have not implemented yet blog about why if you are using 2016 and above, we know why if aren’t on 2016.  If you are unfortunate to be on below 2016 write about what in Query Store you are looking forward to the most once you are able to implement it.  Basically, anything related to Query Store is in for T-SQL Tuesday, hopefully everyone has read up on it and knows what it can do.