Sometimes it is nice to have a tool or report that shows how much your backup storage throughput has degraded over time, especially due to fragmentation and auto-grow/auto-shrink operations. But setting up baseline monitoring, collecting the data, and analyzing it usually costs administrators a lot of time. I found one useful way to save yours.

There are some prerequisites. I assume you are not cleaning up your backup history tables in msdb (as usual :) ) and that you back up your databases to storage which also holds some database files. The approach is therefore most useful in smaller environments where data files share the same DAS or a centralized SAN solution with a shared RAID group.


Then you can use my script, StorageThr.sql, to extract that information from the msdb backup history.
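To show the idea behind it, here is a minimal sketch. It reads the standard msdb.dbo.backupset and msdb.dbo.backupmediafamily tables and derives throughput from backup size and duration; it is my illustration of the approach, not necessarily identical to StorageThr.sql:

    -- Minimal sketch: throughput per backup, computed from msdb history.
    -- Illustration of the idea; not necessarily identical to StorageThr.sql.
    SELECT
        bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bmf.physical_device_name,
        CAST(bs.backup_size / 1048576.0 AS decimal(18, 2)) AS backup_size_MB,
        CAST(bs.backup_size / 1048576.0
             / NULLIF(DATEDIFF(SECOND, bs.backup_start_date,
                               bs.backup_finish_date), 0)
             * 60.0 AS decimal(18, 2)) AS throughput_in_MB_per_min
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
    WHERE bs.type = 'D'   -- 'D' = full database backups
    ORDER BY bs.backup_start_date;

Note the NULLIF guard: backups that start and finish within the same second return a NULL throughput instead of raising a divide-by-zero error.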


As a result, you will get one row per backup with a throughput_in_MB_per_min column. The values can then be exported to reports or plotted as a graph of throughput over time, which makes gradual degradation easy to spot.
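For trend reporting, it can help to aggregate the per-backup values, for example into a daily average. Again, this is a sketch of the idea rather than part of the original script:

    -- Daily average backup throughput; a sketch for trend analysis,
    -- not part of the original StorageThr.sql script.
    SELECT
        CAST(bs.backup_start_date AS date) AS backup_day,
        CAST(AVG(bs.backup_size / 1048576.0
                 / NULLIF(DATEDIFF(SECOND, bs.backup_start_date,
                                   bs.backup_finish_date), 0)
                 * 60.0) AS decimal(18, 2)) AS avg_throughput_MB_per_min
    FROM msdb.dbo.backupset AS bs
    WHERE bs.type = 'D'   -- full database backups only
    GROUP BY CAST(bs.backup_start_date AS date)
    ORDER BY backup_day;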
