Sometimes it is nice to have a tool or report that shows how much your backup storage throughput has degraded over time, especially due to fragmentation and auto-grow/auto-shrink operations. Setting up baseline monitoring, collecting the data and then analysing it usually costs administrators a lot of time, so I found a useful way to save yours.

There are a few prerequisites. I assume you are not cleaning up your backup history tables (as usual :) ) and that you back up your databases to storage which also holds some database files. The approach is therefore mainly useful in smaller environments where data files share the same DAS or a centralized SAN solution with a shared RAID group.


Then you can use my script to extract that information: 
StorageThr.sql
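
The linked script contains the full logic; as a rough idea of what it does, here is a minimal sketch that derives a per-backup throughput figure from the standard msdb backup history tables. The column names come from msdb.dbo.backupset; the exact calculation in StorageThr.sql may differ.

SELECT
    bs.database_name,
    bs.backup_start_date,
    bs.backup_finish_date,
    bs.backup_size / 1048576.0 AS backup_size_MB,
    -- throughput = size in MB divided by duration in minutes;
    -- NULLIF avoids division by zero for sub-second backups
    (bs.backup_size / 1048576.0)
        / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) * 60.0
        AS throughput_in_MB_per_min
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'          -- full backups only
ORDER BY bs.backup_start_date;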


As a result you will get one row per backup with a throughput_in_MB_per_min column. This can then be exported into reports or graphs like this one:
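
If you prefer to see the long-term trend before exporting, a hypothetical follow-up is to average the per-backup throughput by month; the query below is only a sketch built on the same msdb.dbo.backupset columns as above.

SELECT
    DATEFROMPARTS(YEAR(bs.backup_start_date), MONTH(bs.backup_start_date), 1) AS backup_month,
    AVG((bs.backup_size / 1048576.0)
        / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) * 60.0)
        AS avg_throughput_in_MB_per_min
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'
GROUP BY DATEFROMPARTS(YEAR(bs.backup_start_date), MONTH(bs.backup_start_date), 1)
ORDER BY backup_month;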
