Hello Everyone!

After a Christmas and New Year vacation, we are back with a new release of SMT. This time it is much smaller, but it comes earlier than usual.

Major Changes:


   Query Performance collection. After the last update of the query collection in the 1.10 release, we identified several issues in the new approach that we needed to fix. A major challenge was the way we aggregate query performance records during collection. In the previous version, we used query_hash as the main identifier of a specific query, together with its statement offset attributes, to define query uniqueness. The following rough example shows such a situation.

Query Collection Before Change

In the picture, you can see several statement ID records for the same query hash simply because the statement offset was different. This difference could be caused by several factors, such as a different position of the code in the SQL module, dynamically constructed ad hoc queries containing specific literals that change the command text length, and so on. Basically, we reported any change in code position as a distinct query with its own ID.

This approach has some drawbacks. One of them was less data to correlate in over-time query statistics.

Query Over Time Reporting


Another issue was collection cost: this new collection logic increased storage requirements, because multiple similar execution plans and query texts were stored across different statement IDs.

Our new solution handles all of the reported issues. The new logic still uses query_hash as the main identifier of the query, but instead of statement offsets we introduce a new aggregation attribute called ObjectName. ObjectName can have several different values:

  • AD-HOC/Prepared - indicates that the query was not executed from a SQL module, but in a batch request.
  • SQL module name - indicates which procedure, function, or trigger the code was called from.
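To make the aggregation change concrete, here is an illustrative Python sketch (not SMT's actual code; the record fields and values are hypothetical). It groups the same hypothetical collected records first by (query_hash, statement offsets), as in the old logic, and then by (query_hash, ObjectName), as in the new one:

```python
from collections import defaultdict

# Hypothetical collected records: one query_hash, three statement offsets,
# executed once from a procedure and twice as ad hoc batches.
records = [
    {"query_hash": "0xA1", "offset": (10, 90),   "object_name": "dbo.usp_Load", "cpu_s": 2.0},
    {"query_hash": "0xA1", "offset": (120, 200), "object_name": None,           "cpu_s": 1.5},
    {"query_hash": "0xA1", "offset": (300, 380), "object_name": None,           "cpu_s": 0.5},
]

def aggregate(records, key):
    """Sum CPU seconds per aggregation key."""
    totals = defaultdict(float)
    for r in records:
        totals[key(r)] += r["cpu_s"]
    return dict(totals)

# Old logic (before 1.10.1): query_hash + statement offsets -> 3 distinct IDs.
old = aggregate(records, lambda r: (r["query_hash"], r["offset"]))

# New logic (1.10.1): query_hash + ObjectName -> batch requests collapse
# into a single "AD-HOC/Prepared" bucket.
new = aggregate(records, lambda r: (r["query_hash"],
                                    r["object_name"] or "AD-HOC/Prepared"))

print(len(old))  # 3 rows before the change
print(len(new))  # 2 rows after the change
```

The same three executions produce three rows under the old key and only two under the new one, which is where the storage and reporting savings come from.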

This new logic aggregates the data more tightly, producing more meaningful collection results, saving storage space, and improving reporting. The following example shows this:


Query Collection since SMT 1.10.1

As you can see, the results are significant, and we already see a positive impact on SMT database sizes across our customers. But we need to mention some other aspects of this change.

In SMT 1.10.1:

You will no longer be able to track a specific execution of a statement based on offset attributes. Data will always be aggregated under the same query_hash and object name (or under the text "AD-HOC/Prepared" in the case of batch requests). You will also need to look more at the min/max values of the query performance data rather than relying only on averages, since the aggregates now contain queries executed from different batches with different performance characteristics.
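To see why min/max matters more after this change, consider a toy example (illustrative numbers only): two batches share one query_hash but have very different durations, so their combined average matches neither workload.

```python
# Hypothetical durations of executions sharing one query_hash:
# a fast lookup path and a slow scan path, now aggregated into one bucket.
durations_ms = [5, 6, 5, 7, 900, 950]

avg = sum(durations_ms) / len(durations_ms)
print(round(avg))         # ~312 ms: describes neither workload well
print(min(durations_ms))  # 5 ms: the fast batch is still visible here
print(max(durations_ms))  # 950 ms: the slow batch is still visible here
```

The average lands between the two workloads, while the min and max still reveal that both a fast and a slow execution path are hiding under the same aggregate row.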

Since SQL module names are now attached directly to specific statement IDs, reporting performance for code called from those modules remains the same as in SMT 1.10.0.

What we are planning to add is a correlation to the procedure and trigger reports, taking advantage of this new relation between query performance and module performance data on the ObjectName attribute.

We hope you will enjoy this new concept. If you have any notes, issues, or ideas, feel free to get in touch with us!


New Query Collection Parameter QC_MINCPU:

We introduced a new collection parameter for the standard query collector to prevent storing non-meaningful queries. To be recorded by SMT, a query must have a total CPU cost per collection interval higher than the value specified in this filter. The value of this parameter is expressed in seconds of total CPU consumption of the query per collection interval.

Our default collection interval is 600 seconds, and the new default value for QC_MINCPU in this release is 1 second. This means that if a query does not cause more than 1 second of total CPU time in 10 minutes, we will not collect it at all.
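In effect, the filter behaves like this short sketch (illustrative Python, not SMT's implementation; the record fields and CPU numbers are made up):

```python
QC_MINCPU = 1.0  # threshold in seconds of total CPU per collection interval

# Hypothetical per-interval aggregates produced by the query collector.
interval_stats = [
    {"query_hash": "0xA1", "total_cpu_s": 4.2},  # clearly above the threshold
    {"query_hash": "0xB2", "total_cpu_s": 0.3},  # below: not stored
    {"query_hash": "0xC3", "total_cpu_s": 1.0},  # not strictly higher: not stored
]

# Only queries whose total CPU in the interval exceeds the threshold are kept.
stored = [s for s in interval_stats if s["total_cpu_s"] > QC_MINCPU]

print([s["query_hash"] for s in stored])  # ['0xA1']
```

Raising QC_MINCPU therefore trades per-query detail for a smaller query collector footprint, which is exactly the knob described above.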

This parameter does not affect procedure and trigger collection, nor the fixed collections based on query hashes or texts (QC_HASHFILTER, QC_TEXTFILTER).

It is helpful if your SMT is collecting too many records for insignificant queries and you would like to reduce the overall storage footprint of the query collector.
 

The new default value for the QTEXTNULL parameter:

Let's start with an explanation of what this parameter does. It controls how long SMT keeps the batch texts of everything executed in the last X days; filtered texts are not affected by this parameter. This is useful when the Collector.Query_text table is too big or has too many records, which can be caused by low parametrization of statements on the server.

Since this release, we have enabled automatic cleanup of batch texts for queries that have not been collected in the last 31 days.
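The retention rule amounts to the following sketch (illustrative Python, not SMT's cleanup job; the table fields and dates are hypothetical):

```python
from datetime import date, timedelta

QTEXTNULL_DAYS = 31  # new default retention window for batch texts

today = date(2023, 1, 15)
cutoff = today - timedelta(days=QTEXTNULL_DAYS)

# Hypothetical rows from a query-text table: batch text plus the date
# the query was last seen by the collector.
query_texts = [
    {"query_hash": "0xA1", "text": "SELECT ...", "last_collected": date(2023, 1, 10)},
    {"query_hash": "0xB2", "text": "UPDATE ...", "last_collected": date(2022, 11, 1)},
]

# Null out the batch text of queries not collected within the window.
# (In SMT, filtered texts are exempt from this cleanup.)
for row in query_texts:
    if row["last_collected"] < cutoff:
        row["text"] = None

print([r["text"] is None for r in query_texts])  # [False, True]
```

Only the stale batch text is removed; the row itself, and any text kept by a filter, survives.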


Minor Changes:

We have made some changes to be backward compatible with SMT installation on SQL Server 2012, as in the good old times. Yes, we still find customers using old SQL Servers.

  • Fixed a typo in the Storage Throughput Detail report that showed the wrong "Others" text in the legend.
  • Fixed a bug with doubled brackets around the database name after collection of the last plan.
  • Improved security for several procedures.

On behalf of the SMT dev and support team, I hope you will enjoy the new features and continue to have a great experience with SMT.

Regards,
Michal
