
26 March 2025

💠🏭🗒️Microsoft Fabric: Polaris SQL Pool [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources and may deviate from them. Please consult the sources for the exact content!

Unfortunately, besides the referenced papers, there's almost no material that could be used to enhance the understanding of the concepts presented.

Last updated: 26-Mar-2025

Read and Write Operations in Polaris [2]

[Microsoft Fabric] Polaris SQL Pool

  • {def} distributed SQL query engine that powers Microsoft Fabric's data warehousing capabilities
    • designed to unify data warehousing and big data workloads while separating compute and state for seamless cloud-native operations
    • based on a robust DCP 
      • designed to execute read-only queries in a scalable, dynamic and fault-tolerant way [1]
      • a highly-available micro-service architecture with well-defined responsibilities [2]
        • data and query processing are packaged into units (aka tasks)
          • can be readily moved across compute nodes and re-started at the task level
        • widely-partitioned data with a flexible distribution model [2]
        • a task-level "workflow-DAG" that is novel in spanning multiple queries [2]
        • a framework for fine-grained monitoring and flexible scheduling of tasks [2]
  • {component} SQL Server Front End (SQL-FE)
    • responsible for 
      • compilation
      • authorization
      • authentication
      • metadata
        • used by the compiler to 
          • {operation} generate the search space (aka MEMO) for incoming queries
          • {operation} bind metadata to data cells
          • leveraged to ensure the durability of the transaction manifests at commit [2]
            • only transactions that successfully commit need to be actively tracked to ensure consistency [2]
            • any manifests and data associated with aborted transactions are systematically garbage-collected from OneLake through specialized system tasks [2]
  • {component} SQL Server Backend (SQL-BE)
    • used to perform write operations on the LST [2]
      • inserting data into a LST creates a set of Parquet files that are then recorded in the transaction manifest [2]
      • a transaction is represented by a single manifest file that is modified concurrently by (one or more) SQL BEs [2]
        • SQL BE leverages the Block Blob API provided by ADLS to coordinate the concurrent writes  [2]
        • each SQL BE instance serializes the information about the actions it performed, either adding a Parquet file or removing it [2]
          • the serialized information is then uploaded as a block to the manifest file
          • uploading the block does not yet make any visible changes to the file [2]
            • each block is identified by a unique ID generated on the writing SQL BE [2]
        • after completion, each SQL BE returns the ID of the block(s) it wrote to the Polaris DCP [2]
          • the block IDs are then aggregated by the Polaris DCP and returned to the SQL FE as the result of the query [2]
      • the SQL FE further aggregates the block IDs and issues a Commit Block operation against storage with the aggregated block IDs [2]
        • at this point, the changes to the file on storage will become effective [2]
      • changes to the manifest file are not visible until the Commit operation on the SQL FE
        • the Polaris DCP can freely restart any part of the operation in case there is a failure in the node topology [2]
      • the IDs of any blocks written by previous attempts are not included in the final list of block IDs and are discarded by storage [2]
    • [read operations] SQL BE is responsible for reconstructing the table snapshot based on the set of manifest files managed in the SQL FE
      • the result is the set of Parquet data files and deletion vectors that represent the snapshot of the table [2]
        • queries over these are processed by the SQL Server query execution engine [2]
        • the reconstructed state is cached in memory and organized in such a way that the table state can be efficiently reconstructed as of any point in time [2]
          • enables the cache to be used by different operations operating on different snapshots of the table [2]
          • enables the cache to be incrementally updated as new transactions commit [2]
  • {feature} supports explicit user transactions (see the sketch below)
    • can execute multiple statements within the same transaction in a consistent way
      • the manifest file associated with the current transaction captures all the (reconciled) changes performed by the transaction [2]
        • changes performed by prior statements in the current transaction need to be visible to any subsequent statement inside the transaction (but not outside of the transaction) [2]
    • [multi-statement transactions] in addition to the committed set of manifest files, the SQL BE reads the manifest file of the current transaction and then overlays these changes on the committed manifests [1]
    • {write operations} the behavior of the SQL BE depends on the type of the operation.
      • insert operations 
        • only add new data and have no dependency on previous changes [2]
        • the SQL BE can serialize the metadata blocks holding information about the newly created data files just like before [2]
        • the SQL FE, instead of committing only the IDs of the blocks written by the current operation, will instead append them to the list of previously committed blocks
          • ⇐ effectively appends the data to the manifest file [2]
    • {update|delete operations} 
      • handled differently 
        • ⇐ since they can potentially further modify data already modified by a prior statement in the same transaction [2]
          • e.g. an update operation can be followed by another update operation touching the same rows
        • the final transaction manifest should not contain any information about the parts from the first update that were made obsolete by the second update [2]
      • SQL BE leverages the partition assignment from the Polaris DCP to perform a distributed rewrite of the transaction manifest to reconcile the actions of the current operation with the actions recorded by the previous operation [2]
        • the resulting block IDs are sent again to the SQL FE where the manifest file is committed using the (rewritten) block IDs [2]
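
A minimal sketch of such an explicit multi-statement transaction (the dbo.Sales table and its columns are illustrative):

  BEGIN TRANSACTION;

  -- each DML statement's changes are recorded in the transaction's manifest
  INSERT INTO dbo.Sales (SaleId, Amount)
  VALUES (1, 100.00);

  -- sees the row inserted above (the transaction's manifest is overlaid on the committed manifests)
  UPDATE dbo.Sales
  SET Amount = 150.00
  WHERE SaleId = 1;

  -- only now do the manifest changes become visible outside the transaction
  COMMIT TRANSACTION;
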
  • {concept} Distributed Query Processor (DQP)
    • responsible for 
      • distributed query optimization
      • distributed query execution
      • query execution topology management
  • {concept} Workload Management (WLM)
    •  consists of a set of compute servers that are, simply, an abstraction of a host provided by the compute fabric, each with a dedicated set of resources (disk, CPU and memory) [2]
      • each compute server runs two micro-services
        • {service} Execution Service (ES) 
          • responsible for tracking the life span of tasks assigned to a compute container by the DQP [2]
        • {service} SQL Server instance
          • used as the back-bone for execution of the template query for a given task  [2]
            • ⇐ holds a cache on top of local SSDs 
              • in addition to in-memory caching of hot data
            • data can be transferred from one compute server to another
              • via dedicated data channels
              • the data channel is also used by the compute servers to send results to the SQL FE that returns the results to the user [2]
              • the life cycle of a query is tracked via control flow channels from the SQL FE to the DQP, and the DQP to the ES [2]
  • {concept} cell data abstraction
    • the key building block that enables the abstraction of data stores
      • abstracts DQP from the underlying store [1]
      • any dataset can be mapped to a collection of cells [1]
      • allows distributing query processing over data in diverse formats [1]
      • tailored for vectorized processing when the data is stored in columnar formats [1] 
      • further improves relational query performance
    • 2-dimensional
      • distributions (data alignment)
      • partitions (data pruning)
    • each cell is self-contained with its own statistics [1]
      • used for both global and local QO [1]
      • cells can be grouped physically in storage [1]
      • queries can selectively reference either cell dimension or even individual cells depending on predicates and type of operations present in the query [1]
    • {concept} distributed query processing (DQP) framework
      • operates at the cell level 
      • agnostic to the details of the data within a cell
        • data extraction from a cell is the responsibility of the (single node) query execution engine, which is primarily SQL Server, and is extensible for new data types [1], [2]
  • {concept} dataset
    • logically abstracted as a collection of cells [1] 
    • can be arbitrarily assigned to compute nodes to achieve parallelism [1]
    • uniformly distributed across a large number of cells 
      • [scale-out processing] each dataset must be distributed across thousands of buckets or subsets of data objects, such that they can be processed in parallel across nodes [1]
  • {concept} session
    • supports a spectrum of consumption models, ranging from serverless ad-hoc queries to long-standing pools or clusters [1]
    • all data are accessible from any session [1]
      • multiple sessions can access all underlying data concurrently  [1]
  • {concept} Physical Metadata layer
    • new layer introduced in the SQL Server storage engine [2]
See also: Polaris

References:
[1] Josep Aguilar-Saborit et al (2020) POLARIS: The Distributed SQL Engine in Azure Synapse, Proceedings of the VLDB Endowment PVLDB 13(12) [link]
[2] Josep Aguilar-Saborit et al (2024), Extending Polaris to Support Transactions [link]
[3] Gjnana P Duvvuri (2024) Microsoft Fabric Warehouse Deep Dive into Polaris Analytic Engine [link]

Resources:
[R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]
[R2] Patrick Pichler (2023) Data Warehouse (Polaris) vs. Data Lakehouse (Spark) in Microsoft Fabric [link]
[R3] Tiago Balabuch (2023) Microsoft Fabric Data Warehouse - The Polaris engine [link]

Acronyms:
ADLS - Azure Data Lake Storage
CPU - Central Processing Unit
DAG - Directed Acyclic Graph
DB - Database
DCP - Distributed Computation Platform 
DQP - Distributed Query Processing 
DWH - Data Warehouses 
ES - Execution Service
LST - Log-Structured Table
SQL BE - SQL Backend
SQL FE - SQL Frontend
SSD - Solid State Disk
WAL - Write-Ahead Log
WLM - Workload Management

22 March 2025

💠🛠️🗒️SQL Server: Statistics Issues [Notes]

Disclaimer: This is work in progress based on notes gathered over the years, intended to consolidate information from various sources. The content has yet to be reviewed against the current documentation.

Last updated: 22-Mar-2025

[SQL Server 2005] Statistics

  • {def} objects that contain statistical information about the distribution of values in one or more columns of a table or indexed view [2] [see notes]
  • {issue} inaccurate statistics (aka bad statistics)
    • the primary source for cardinality estimation errors [6]
    • guessed selectivity can lead to a less-than-optimal query plan
    • {cause} missing statistics
      • if an error or other event prevents the query optimizer from creating statistics, the query optimizer creates the query plan without using statistics [22] 
        • the query optimizer marks the statistics as missing and attempts to regenerate the statistics the next time the query is executed [22]
      • [SSMS] indicated as warnings when the execution plan of a query is graphically displayed [22]
    • [SQL Server Profiler] monitoring the Missing Column Statistics event class indicates when statistics are missing [22]
    • troubleshooting
      • {step} verify that AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS are on [22]
      • verify that the database is not read-only [22]
        • the query optimizer cannot save statistics in read-only databases [22]
      • create the missing statistics by using the CREATE STATISTICS statement [22] (see the sketch below)
    • {cause} use of local variables
    • {cause} non-constant-foldable expressions
    • {cause} complex expressions
    • {cause} ascending key columns
      • can cause inaccurate statistics in tables with frequent INSERTs because new values all lie outside the histogram [3]
    • it is not necessary for statistics to be highly accurate to be effective [18]
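
A minimal sketch of the troubleshooting steps above (table and column names are placeholders):

  -- verify that auto create/update statistics are on and the database is not read-only
  SELECT name, is_auto_create_stats_on, is_auto_update_stats_on, is_read_only
  FROM sys.databases
  WHERE name = DB_NAME();

  -- create the missing statistics explicitly
  CREATE STATISTICS ST_SalesOrder_CustomerId
  ON dbo.SalesOrder (CustomerId);
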
  • {issue} stale statistics (aka outdated statistics)
    • statistics are considered outdated based on the number of modifications to the statistics (and index) columns [6]
    • SQL Server checks to see if the statistics are outdated when it looks up a plan from the cache [6]
      •  it recompiles the query if they are [6]
        • ⇐ triggers a statistics update [6]
    •  [temporary tables] can increase the number of recompilations triggered by outdated statistics [6]
    • [query hint] KEEPFIXED PLAN
      • prevents query recompilation in cases of outdated statistics [6]
        • queries are recompiled only when the schemas of the underlying tables are changed or the recompilation is forced
        • e.g. when a stored procedure is called using the WITH RECOMPILE clause [6]
    • can be identified based on (see the query below)
      • STATS_DATE(object_id, stats_id), which returns the date of the last update
      • the rowmodctr column from sys.sysindexes
    • has negative impact on performance 
      • almost always outweighs the performance benefits of avoiding automatic statistics updates/creation [2]
        • ⇐ applies also to missing statistics [2]
        • auto update/create stats should be disabled only when thorough testing has shown that there's no other way to achieve the performance or scalability you require [2]
        • without correct and up-to-date statistical information, it can be challenging to determine the best execution plan for a particular query [8]
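
A possible detection query based on the criteria above (sys.dm_db_stats_properties is available from SQL Server 2008 R2 SP2/2012 onward; rowmodctr in sys.sysindexes is the legacy alternative):

  -- last update date and number of modifications per statistics object
  SELECT OBJECT_NAME(s.object_id) AS table_name
       , s.name AS stats_name
       , STATS_DATE(s.object_id, s.stats_id) AS last_updated
       , sp.modification_counter
  FROM sys.stats s
       CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
  WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  ORDER BY sp.modification_counter DESC;
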
  • {issue} missing statistics
    • {symptom} excessive number of logical reads 
      • often indicates suboptimal execution plans 
        • due to missing indexes and/or suboptimal join strategies selected 
          • because of incorrect cardinality estimation [6]
      • shouldn't be used as the only criteria during optimization 
        • further factors should be considered
          • resource usage, parallelism, related operators in the execution plan [6]
  • {issue} unused statistics 
    • a table can end up having many statistics that serve no purpose
      • it makes sense to review and clean up the statistics as part of general maintenance
        • based on the thresholds for the automatic update
  • {issue} skewed statistics 
    • may happen when the sample rate doesn’t reflect the characteristics of the values’ distribution
      • {solution} using a higher sample rate 
        • update statistics manually with FULLSCAN (see the example below)
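
E.g. (table and statistics names are placeholders):

  -- rebuild the statistics object from all rows instead of a sample
  UPDATE STATISTICS dbo.SalesOrder ST_SalesOrder_CustomerId WITH FULLSCAN;
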
  • {issue} statistics accumulate over time 
    • statistics objects are persistent and will remain in the database forever until explicitly dropped
      • this becomes expensive over time 
    • {solution} drop all auto stats (see the script below)
      • will place some temporary stress on the system [11]
      • all queries that adhere to a pattern that requires statistics to be created will wait [11]
      • typically, in a matter of minutes most of the missing statistics will be back in place and the temporary stress will be over [11]
      • {tip} the cleanup can be performed at off-peak hours, ‘warming up’ the database by capturing and replaying a trace of the most common read-only queries [11]
        • creates many of the required statistics objects without impacting the ‘real’ workload [11]
      • {downside} statements may deadlock on schema locks with concurrent sessions [11]
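
A sketch for generating the cleanup statements (auto-created statistics are flagged via auto_created in sys.stats and typically carry the _WA_Sys prefix):

  -- generate DROP STATISTICS statements for all auto-created statistics
  SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
       + '.' + QUOTENAME(o.name) + '.' + QUOTENAME(s.name) + ';'
  FROM sys.stats s
       JOIN sys.objects o ON s.object_id = o.object_id
  WHERE s.auto_created = 1
    AND o.is_ms_shipped = 0;
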
  • {issue} overlapping statistics 
    • column statistics created by SQL Server based on queries that get executed can overlap with statistics for indexes created by users on the same column [2] 


References:
[2] Ken Henderson (2003) Guru's Guide to SQL Server Architecture and Internals
[3] Kendal Van Dyke (2010) More On Overlapping Statistics [link]
[6] Dmitri Korotkevitch (2016) Pro SQL Server Internals 2nd Ed.
[18] Joe Chang (2012) Decoding STATS_STREAM
[22] MSDN (2016) Statistics [link]

Acronyms:
SSMS - SQL Server Management Studio
  

💠🛠️🗒️SQL Server: Indexed Views [Notes]

Disclaimer: This is work in progress based on notes gathered over the years, intended to consolidate information from various sources. The content has yet to be reviewed against the current documentation.

Last updated: 22-Mar-2025

[SQL Server 2005] Indexed View

  • {def} a materialized view
    • materializes the data from the view queries, storing it in the database in a way similar to tables [6]
      • ⇒ its definition is computed and the resulting data stored just like a table [3]
      • the view is indexed by creating a unique clustered index on it
        • the resulting structure is physically identical to a table with a clustered index
          • ⇐ nonclustered indexes also are supported on this structure
      • can be created on a partitioned table, respectively can be partitioned [1]
    • {benefit} can improve the performance of some types of queries [3]
      • e.g. queries that aggregate many rows
      • ⇐ because the view is stored the same way a table with a clustered index is stored [1]
      • ⇐ not well-suited for underlying data that are frequently updated [3]
      •  more expensive to use and maintain than filtered indexes [5]
    • [query optimizer] 
      • can use it to speed up the query execution [1]
        • the view doesn't have to be referenced in the query for the optimizer to consider it for a substitution [1]
        • {downside} DML query performance can degrade significantly [1]
          • ⇐ in some cases, a query plan can't even be produced [1]
          • when executing UPDATE, DELETE or INSERT on the base table referenced, the indexed views must be updated as well [1]
          • {recommendation} test DML queries before production use [1]
            • analyze the query plan and tune/simplify the DML statement [1]
      • can use the structure to return results more efficiently to the user
        • contains logic to use this index in either of the cases 
          • the original query text referenced the view explicitly [2]
          • the user submits a query that uses the same components as the view (in any equivalent order) [2]
          • ⇐ the query processor expands indexed views early in the query pipeline and always uses the same matching code for both cases [2]
            • the WITH(NOEXPAND) hint tells the query processor not to expand the view definition [2]
            • also instructs the query processor to perform an index scan of the indexed view rather than expand it into its component parts [5]
            • any extra rows in the indexed view are reported as 8907 errors [5]
            • any missing rows are reported as 8908 errors [5]
      • expose some of the benefits of view materialization while retaining the benefits of global reasoning about query operations [2]
      • expanded (aka in-lined) before optimization begins
        • gives the Query Optimizer opportunities to optimize queries globally [2]
        • makes it difficult for the (query) optimizer to consider plans that perform the view evaluation first, then process the rest of the query [2]
          • arbitrary tree matching is a computationally complex problem, and the feature set of views is too large to perform this operation efficiently [2]
      • cases in which it does not match the view
        • indexed views are inserted into the Memo and evaluated against other plan choices
          • while they are often the best plan choice, this is not always the case [2]
          • the Query Optimizer can detect logical contradictions between the view definition and the query that references the view [2]
        • there are also some cases where the Query Optimizer does not recognize an indexed view even when it would be a good plan choice [2]
          • often, these cases deal with complex interactions between high-level features within the query processor (e.g. computed column matching, the algorithm to explore join orders) [2]
          • consider the WITH (NOEXPAND) hint to force the query processor to pick that indexed view [2]
            •  this usually is enough to get the plan to include the indexed view [2]
        • indexed view alternatives 
          • are generated and stored in the Memo 
          • are compared using costing equations against other possible plans
          • partial matches cost the residual operations as well
            • an indexed-view plan can be generated but not picked when the Query Optimizer considers other plans to have lower costs [2]
        • maintained as part of the update processing for tables on which the view is based
          • this makes sure that the view provides a consistent result if it is selected by the Query Optimizer for any query plan [2]
          • some query operations are incompatible with this design guarantee
            • restrictions are placed on the set of supported constructs in indexed views to make sure that the view can be created, matched, and updated efficiently [2]
        • {operation} updating indexed views
          • the core question behind the restrictions is “Can the query processor compute the necessary changes to the Indexed View clustered and nonclustered indexes without having to recompute the whole indexed view?” [2]
            • if so, the query processor can perform these changes efficiently as part of the maintenance of the base tables that are referenced in the view [2]
              • this property is relatively easy for filters, projections (compute scalar), and inner joins on keys [2]
              • operators that destroy or create data are more difficult to maintain, so often these are restricted from use in indexed views [2]
        • matching indexed views is supported in cases beyond exact matches of the query text to the view definition [2]
          • it also supports using an indexed view for inexact matches where the definition of the view is broader than the query submitted by the user [2]
            • then applies residual filters, projections (columns in the select list), and even aggregates to use the view as a partial precomputation of the query result [2]
    • {concept} statistics on indexed views
      • normally statistics aren't needed
        • because the substitution of the indexed views into the query plan is considered only after all the statistics for the underlying tables and indexes are attached to the query plan [3]
        • used if the view is directly referenced by the NOEXPAND hint in a FROM clause 
          • an error is generated and the plan is not created if the NOEXPAND hint is used on a view that does not also contain an index [3]
      • can’t be created by using sp_createstats or updated by using sp_updatestats. 
      • auto update and auto create statistics features work for indexed views
        • ⇐ statistics can also be created/updated manually
          • via CREATE STATISTICS on the indexed view columns
          • via UPDATE STATISTICS to update column or index statistics on indexed views
    • {operation} creating a view
      • requires that the underlying object’s schema can’t change
      • requires WITH SCHEMABINDING option [5]
      • ⇒ must include the two-part names of all referenced tables [5]
      • ⇐ the tables can't be dropped and the columns can't be altered while they participate in the view unless the view is dropped [5]
      • ⇐ an error is raised [5]
      • the user must hold 
        • the CREATE VIEW permission in the database [1]
        • ALTER permission on the schema in which the view is being created [1]
        • if the base table resides within a different schema, the REFERENCES permission on the table is required as a minimum [1]
        • if the user creating the index differs from the users who created the view, for the index creation alone the ALTER permission on the view is required [1]
    • {operation} creating an index on the view (see the example below)
      • indexes can only be created on views that have the same owner as the referenced table or tables (aka intact ownership chain between the view and the tables) [1]
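
A minimal sketch of creating and using an indexed view (the dbo.SalesOrder table is illustrative and its Amount column is assumed NOT NULL; note the SCHEMABINDING option, the two-part names, and SUM/COUNT_BIG as the workaround for the AVG limitation mentioned below):

  CREATE VIEW dbo.vw_SalesByCustomer
  WITH SCHEMABINDING
  AS
  SELECT CustomerId
       , SUM(Amount) AS TotalAmount
       , COUNT_BIG(*) AS OrderCount  -- mandatory for indexed views with GROUP BY
  FROM dbo.SalesOrder
  GROUP BY CustomerId;
  GO

  -- materializes the view: the unique clustered index must be created first
  CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByCustomer
  ON dbo.vw_SalesByCustomer (CustomerId);
  GO

  -- forces the use of the indexed view instead of expanding its definition
  SELECT CustomerId, TotalAmount
  FROM dbo.vw_SalesByCustomer WITH (NOEXPAND);
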
    • {operation} dropping a view
      • causes all indexes on the view to be dropped [1]
        • ⇐ all nonclustered indexes and auto-created statistics on the view are dropped when the clustered index is dropped [1]
          • {exception} user-created statistics on the view are maintained [1]
      • nonclustered indexes can be individually dropped [1]
      • dropping the clustered index on the view 
        • removes the stored result set [1]
        • the optimizer returns to processing the view like a standard view [1]
    • {operation} disable indexes on tables and views
      • when a clustered index on a table is disabled, indexes on views associated with the table are also disabled [1]
    • {option} EXPAND VIEWS
      • allows to prevent the Database Engine from using indexed views [1]
        • if any of the listed options are incorrectly set, this option prevents the optimizer from using the indexes on the views [1]
        • via OPTION (EXPAND VIEWS) hint
    • {recommendation} when using datetime and smalldatetime string literals in indexed views, explicitly convert the literal to the date type by using a deterministic date format style [1]
    • {limitation} AVG is not allowed {workaround} use SUM and COUNT_BIG [5]
    • {limitation} impacted by SET options [1]
      • {restriction} require fixed values for several SET options [1]
      • {recommendation} set the ARITHABORT user option to ON server-wide as soon as the first indexed view or index on a computed column is created in any database on the server [1]
    • {limitation} further requirements apply (see [1])
    • {limitation} aren't supported on top of temporal queries
      • ⇐ queries that use the FOR SYSTEM_TIME clause
    • {scenario} simplifying SQL queries
    • {scenario} abstracting data models from user models
    • {scenario} enforcing user security


References:
[1] Microsoft Learn (2024) SQL Server: Create indexed views [link]
[2] Kalen Delaney et al (2009) Microsoft® SQL Server® 2008 Internals
[3] Microsoft Learn (2024) SQL Server: Views [link]
[4] Microsoft Learn (2024) SQL Server: CREATE INDEX (Transact-SQL) [link]
[5] Kalen Delaney et al (2012) Microsoft® SQL Server® 2012 Internals
[6] Dmitri Korotkevitch (2016) Pro SQL Server Internals 2nd Ed.

Resources:
[R1] Microsoft Learn (2024) SQL Server: Optimize index maintenance to improve query performance and reduce resource consumption [link]

Acronyms:
DML - Data Manipulation Language
QO - Query Optimizer

19 March 2025

💠🛠️🗒️SQL Server: Views [Notes]

Disclaimer: This is work in progress based on notes gathered over the years, intended to consolidate information from various sources. The content has yet to be reviewed against the current documentation.

Last updated: 19-Mar-2025

[SQL Server 2005] View (aka virtual table)

  • {def} a database object that encapsulates a SQL statement and that can be used as a virtual table in further SQL statements
    • cannot be executed by itself
      •  it must be used within a query [15]
    • doesn't store any data 
      • except indexed views
      • data is dynamically produced from the underlying table when the view is used [32]
        • views depend on the underlying tables and act like a filter on the underlying tables [32]
    • used just like regular tables without incurring additional cost
      • unless the view is indexed [25]
    • turning a query into a view (see the example below)
      • remove the ORDER BY clause
      • ensure there are no duplicate column names
      • ensure that each column has a name
    • projected columns
      • columns included in the view 
    • view’s column list 
      • renames every output column just as if every column had those alias names in the SELECT statement
      • a view is more self-documenting if the column names of the view are specified in the SELECT statement and not listed separately in the view [27]
    • {restriction} sorting is not allowed in a view
      •  unless the view includes a TOP predicate 
        • ORDER BY clause serves only to define which rows qualify for the TOP predicate [15]
          • the only way to logically guarantee sorted results is to define the ORDER BY clause in the executing query [15]
        • [SQL Server 2005] had a bug in the Query Optimizer that would enable an ORDER BY in a view using a TOP 100 PERCENT predicate [15]
          • the behavior was never documented or officially supported [15]
      • the same applies to views using an OFFSET-FETCH clause
    • {restriction} parameters can’t be passed to a view  [100]
      • {alternative} use an inline table-valued function 
    • {restriction} cannot reference a variable inside the SELECT statement [100]
    • {restriction} cannot create a table, whether permanent or temporary
      • ⇒ cannot use the SELECT/INTO syntax in a view
    • {restriction} can reference only permanent tables
      • ⇒  cannot reference a temporary table [100]
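
A minimal example following the rules above (no ORDER BY, no duplicate or missing column names; the dbo.Employee table is illustrative):

  CREATE VIEW dbo.vw_ActiveEmployees
  AS
  SELECT EmployeeId
       , FirstName + ' ' + LastName AS FullName  -- computed columns must be named
       , HireDate
  FROM dbo.Employee
  WHERE IsActive = 1;  -- no ORDER BY clause
  GO

  -- used like a virtual table within a query
  SELECT FullName
  FROM dbo.vw_ActiveEmployees
  WHERE HireDate >= '20200101';
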
    • {benefit} present the correct fields to the user
    • {benefit} enforce security 
        • by specifying 
          • only needed columns
            • projects a predefined set of columns [15]
            • hides sensitive, irrelevant, or confusing columns [15]
            • should be used in parallel with SQL Server–enforced security [15]
          • only needed records
        • by allowing users access to the view without the need to give access to the used tables
          • grant users read permission from only the views, and restrict access to the physical tables [15]
    • {benefit} maintainability
    • {benefit} provides a level of abstraction
      • hides the complexity of the underlying data structures 
      • encapsulates (business) logic
      • denormalize or flatten complex joins 
      • can consolidate data across databases/servers
      • can be used as single version of truth
    • {benefit} allow changing data in the base tables
    • {downside} layers of nested views add needless overhead to understanding the views
    • {downside} single-purpose views quickly become obsolete and clutter the database [15]
    • {downside} complex views are perceived as having poor performance [15]
    • {best practice} use generic/standard naming conventions
    • {best practice} use aliases for cryptic/recurring column names
    • {best practice} consider only the requested columns
    • {best practice} group specific purpose view under own schema 
    • {best practice} avoid hardcoding values 
    • {best practice} use views for column-level security together with SQL Server–enforced security
    • {best practice} limit views to ad-hoc queries and reports
      • for extensibility and control [15]
      •  performance isn’t the reason [15]
    • {poor practice} create views for single-purpose queries (aka one-time requests)
    • {operation} create a view
    • {operation} drop a view
    • {operation} alter a view
    • {operation} select data
    • {operation} update data
      • unless the view is a simple single table view, it’s difficult to update the underlying data through the view [15]
    • {type} inline views
      • exist only during the execution of a query [32]
      • simplify the development of a one-time query [32]
        • allows creating queries in steps
          • enables troubleshooting 
      • can replace inline UDFs
      • alternatives
        • inline UDFs
        • temporary tables
    • {type} indexed views
      • materialize the data, storing the results of the view in a clustered index on disk [15]
      • similar to a covering index 
        • but with greater control 
          • can include data from multiple data sources [15]
          • no need to include the clustered index keys [15]
        • designing an indexed view is more like designing an indexing structure than creating a view [15]
      • can cause deadlocks when two or more of the participating tables are updated/inserted/deleted from two or more sessions in parallel such that they block each other [29]
    • {type} compatibility views
      • allow accessing a subset of the SQL Server 2000 system tables
        • don’t contain any metadata related to features added after
      • the views have the same names as many of the system tables in the previous version, as well as the same column names
        • ⇒ any code that uses the SQL Server 2000 system tables won’t break [16]
        • there’s no guarantee that exactly the same results will be returned as from the corresponding tables in SQL Server 2000 [16]
      • accessible from any database
      • hidden in the resource database
        • e.g. sysobjects, sysindexes, sysusers, sysdatabases
    • {type} [SQL Server 2005] catalog views (see the example below)
      • general interface to the persisted system metadata
      • built on an inheritance model
        • ⇒  no need to redefine internally sets of attributes common to many objects
      • available over sys schema
        • must be included in object’s reference
      • some of the names are easy to remember because they are similar to the SQL Server 2000 system table names [16]
      • the columns displayed are very different from the columns in the compatibility views
      • some metadata appears only in the master database 
        • keeps track of system-wide data (e.g. databases and logins)
        • other metadata is available in every database (e.g. objects and permissions)
        • metadata appearing only in the msdb database isn’t available through catalog views but is still available in system tables, in the schema dbo (e.g. backup and restore, replication, Database Maintenance Plans, Integration Services, log shipping, and SQL Server Agent)
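
E.g., the same object metadata retrieved via a compatibility view and via the corresponding catalog view:

  -- compatibility view (SQL Server 2000 names and columns)
  SELECT name, id, xtype FROM sysobjects WHERE xtype = 'U';

  -- catalog view (must be referenced via the sys schema)
  SELECT name, object_id, type_desc FROM sys.objects WHERE type = 'U';
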
    • {type} partitioned views 
      • allow the data in a large table to be split into smaller member tables (see the example below)
        • the data is partitioned between the member tables based on ranges of data values in one of the columns [4]
        • the data ranges for each member table are defined in a CHECK constraint specified on the partitioning column [4]
        • a view that uses UNION ALL to combine selects of all the member tables into a single result set is then defined [4]
        • when SELECT statements referencing the view specify a search condition on the partition column, the query optimizer uses the CHECK constraint definitions to determine which member table contains the rows [4]
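
A sketch of a local partitioned view over yearly member tables (names and ranges are illustrative):

  -- the CHECK constraints on the partitioning column enable partition elimination
  CREATE TABLE dbo.Sales2023 (SaleId int, SaleYear int CHECK (SaleYear = 2023), Amount money);
  CREATE TABLE dbo.Sales2024 (SaleId int, SaleYear int CHECK (SaleYear = 2024), Amount money);
  GO

  CREATE VIEW dbo.vw_Sales
  AS
  SELECT SaleId, SaleYear, Amount FROM dbo.Sales2023
  UNION ALL
  SELECT SaleId, SaleYear, Amount FROM dbo.Sales2024;
  GO

  -- the optimizer touches only dbo.Sales2024 thanks to the CHECK constraints
  SELECT SUM(Amount) FROM dbo.vw_Sales WHERE SaleYear = 2024;
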
    • {type} distributed partition views (DPV)
      • local partitioned views
        • a single table is horizontally split into multiple tables, usually all having the same structure [30]
      • cross database partitioned views 
        • tables are split among different databases on the same server instance
      • distributed (across server or instance) partitioned views
        • tables participating in the view reside in different databases which reside on different servers or different instances
    • {type} nested views
      • views referred by other views [15]
      • can lead to an abstraction layer with nested views several layers deep 
        • too difficult to diagnose and maintain [15]
    • {type} updatable view
      • view that allows updating the underlying tables
        • only one table may be updated
        • if the view includes joins, then the UPDATE statement that references the view must change columns in only one table [15]
      • typically not a recommended solution for application design
      • WITH CHECK OPTION causes the WHERE clause of the view to check the data being inserted or updated through the view in addition to the data being retrieved [15] (see the example below)
        • it makes the WHERE clause a two-way restriction [15]
          • ⇒  can protect the data from undesired inserts and updates [15]
        • ⇒  useful when the view should limit inserts and updates with the same restrictions applied to the WHERE clause [15]
        • when CHECK OPTION isn’t used, records inserted through the view that don’t match the WHERE clause constraints disappear from the view (aka disappearing rows) [15]
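
A short illustration of the two-way restriction (the dbo.Employee table is illustrative):

  CREATE VIEW dbo.vw_SalesDept
  AS
  SELECT EmployeeId, DepartmentId, Salary
  FROM dbo.Employee
  WHERE DepartmentId = 10
  WITH CHECK OPTION;
  GO

  -- fails: the new row would not satisfy the view's WHERE clause
  INSERT INTO dbo.vw_SalesDept (EmployeeId, DepartmentId, Salary)
  VALUES (1001, 20, 50000);
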
    • {type} non-updatable views
      • views that don’t allow updating the underlying tables
      • {workaround} build an INSTEAD OF trigger that inspects the modified data and then performs a legal UPDATE operation based on that data [15]
    • {type} horizontally positioned views
      • used to enforce row-level security with the help of the WITH CHECK option
        • {downside} has a high maintenance cost [15] 
        •  {alternative} row-level security can be designed using user-access tables and stored procedures [15]
    • {type} schema-bound views
      • the SELECT statement must include the schema name for any referenced objects [15]
        • SELECT * (all columns) is not permitted [15]
    • {type} subscription views 
      • a view used to export Master Data Services data to subscribing systems

References:
[4] Microsoft (2013) SQL Server 2000 Documentation
[15] Adam Jorgensen et al (2012) Microsoft® SQL Server® 2012 Bible
[16] Bob Beauchemin et al (2012) Microsoft SQL Server 2012 Internals
[25] Basit A Masood-Al-Farooq et al (2014) SQL Server 2014 Development Essentials: Design, implement, and deliver a successful database solution with Microsoft SQL Server 2014
[30] Kevin Cox (2007) Distributed Partitioned Views / Federated Databases: Lessons Learned
[32] Sikha S Bagui & Richard W Earp (2006) Learning SQL on SQL Server 2005
[100] Itzik Ben-Gan et al (2012) Exam 70-461: Querying Microsoft SQL Server 2012

Acronyms:
DPV - Distributed Partition Views
UDF - User-Defined Function

💠🛠️🗒️SQL Server: Stored Procedures [Notes]

Disclaimer: This is work in progress based on notes gathered over the years, intended to consolidate information from various sources. The content has yet to be reviewed against the current documentation.

Last updated: 19-Mar-2025

[SQL Server 2005] Stored procedure

  • {def} a database object that encapsulates one or more statements and compiled when used
    • is a saved batch
      • whatever a batch can do, a stored procedure can do
    • {characteristic} abstraction layer
      • provides the means of abstracting/decoupling a database [1]
    • {characteristic} performant
      •  the fastest possible code, when well-written
        • keeps the execution of data-centric code close to the data
      • easier to index tune a database with stored procedures
    • {characteristic} usability
      • easier to write, consume and troubleshoot stored procedures
      • less likely to contain data integrity errors
      • easier to unit test, than ad-hoc SQL code
    • {characteristic} secure
      • {best practice} locking down the tables and providing access only through stored procedures is a standard for database development [1]
      • minimizes the risk of SQL injection attacks
      • provides an alternative for passing a dataset to SQL Server
    • can have zero or more input parameters
    • can have zero or more output parameters
    • highly dependent on the objects it calls
    • [SQL Server 2008] provides enhanced ways to view these dependencies
      • managed by means of the DDL commands
  • {operation} create stored procedure
    • via CREATE PROCEDURE <schema.name> (see the example below)
    • CREATE must be the first command in a batch
    • the termination of the batch ends the creation of the stored procedure
    • when created, its text is saved in a system table
      • like other database objects
      • the text is only stored as definition 
        • ⇐  the text is not stored for the execution of the stored procedure
    • {best practice} never use "sp_" to prefix the name of a stored procedure
      • ⇐ the prefix is reserved for system stored procedures
    • {best practice} use a standard prefix for the stored procedure name (e.g. usp, Proc)
      • it helps identify an object as a stored procedure when reviewing and troubleshooting code [15]
    • {best practice} always use a two-part naming convention
      • ensures that the stored procedure is added to the appropriate schema [15]
    • {best practice} use descriptive names
    • {best practice} implement error handling
      • syntax and logic errors should be gracefully handled, with meaningful information sent back to the calling application [15]
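
A sketch applying these practices (two-part name, descriptive usp prefix, error handling; the dbo.SalesOrder table is illustrative):

  CREATE PROCEDURE dbo.usp_GetCustomerOrders
      @CustomerId int            -- input parameter
    , @OrderCount int OUTPUT     -- output parameter
  AS
  BEGIN
      SET NOCOUNT ON;
      BEGIN TRY
          SELECT OrderId, OrderDate, Amount
          FROM dbo.SalesOrder
          WHERE CustomerId = @CustomerId;

          SET @OrderCount = @@ROWCOUNT;
      END TRY
      BEGIN CATCH
          -- send meaningful information back to the caller
          -- (THROW requires SQL Server 2012+; use RAISERROR on earlier versions)
          THROW;
      END CATCH
  END;
  GO

  -- execution (EXECUTE can be omitted if the call is the first statement in a batch)
  DECLARE @cnt int;
  EXEC dbo.usp_GetCustomerOrders @CustomerId = 42, @OrderCount = @cnt OUTPUT;
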
  • {operation} alter stored procedure
    • replaces the entire existing stored procedure with new code
    • preferable to dropping and recreating it
      • because the latter method removes any permissions [1]
  • {operation} drop stored procedure
    •  removes it from the database
  • {operation} execute stored procedure
    • via EXECUTE or EXEC
      • EXECUTE can be omitted when the stored procedure call is the first statement in a batch
  • {concept}  compilation
    • automatic process that takes place the first time the code is executed 
    • {option} WITH ENCRYPTION
      • obfuscates the code in the object 
        • ⇐ not sufficient to prevent a determined user from reading the code
      • the stored procedure text is not directly readable
      • there is no routine to hide the code
        • SQL Server applies a bitwise OR to the code in the object
      • anyone with VIEW DEFINITION authority on an object can see the code though
      • carried from early versions of SQL Server [2]
      • {best practice} there must be a compelling and carefully considered justification for encrypting a stored procedure [15]
        • e.g.  third-party software
  • {type} system stored procedures
    • stored in master database
  • {type} extended stored procedures
    • routines residing in DLLs that function similarly to regular stored procedures [33]
      • usually written in C or C++
    • receive parameters and return results via SQL Server's Open Data Services API [33]
    • reside in the master database [33]
    • run within the SQL Server process space [33]
    • ⇐ unlike system procedures, aren't automatically located in the master database when executed from another database [33]
    • don't assume the context of the current database when executed [33]
    • fully qualify the reference to execute an extended procedure from a database other than the master [33]
      • {workaround} wrapping the extended stored procedure into a system stored procedure
        •  can be called from any database without requiring the master prefix
        • technique used with a number of SQL Server's own extended procedures
    • {type} internal stored procedures 
      • system-supplied stored procedures implemented internally by SQL Server [33]
      • have stubs in master..sysobjects
      • are neither true system procedures nor extended procedures [33]
        • listed as extended procedures, but they are actually implemented internally by the server [33]
      • cannot be dropped or replaced with updated DLLs
        • normally this happens when a service pack is applied [33]
      • examples: 
        • sp_executesql
        • sp_xml_preparedocument
        • most of the sp_cursor routines
        • sp_reset_connection
    • {type} user-defined stored procedures
    • {type} remote stored procedures
      • may only be remotely called
        • ⇐  may not be remotely created [15]
      • require that the remote server be a linked server
        • namely, a four-part name reference 
          • via EXECUTE Server.Database.Schema.StoredProcedureName;
      • {alternative} via a distributed query
        • e.g. OPENQUERY(LinkedServerName, 'EXECUTE Schema.StoredProcedureName');
    • {type} recursive stored procedures 
      • stored procedures that call themselves 
      • perform numeric computations that lend themselves to repetitive evaluation by the same processing steps [33]
      • calls can be nested up to 32 levels deep
    • {type} nested stored procedure
      • calls can be nested up to 32 levels deep
  • {advantage} execution plan retention and reuse
  • {advantage} query auto-parameterization
  • {advantage} allow encapsulation of business rules and policies
  • {advantage} allow application modularization
  • {advantage} allow sharing of application logic between applications
  • {advantage} allow access to database objects that is both secure and uniform
  • {advantage} allow consistent, safe data modification
  • {advantage} allow network bandwidth conservation
  • {advantage} support for automatic execution at system start-up
  • {limitation} cannot be schema bound
    • [SQL Server 2012] {feature} Result Sets 
      • can guarantee the structure of the returned results at run time
  • {myth} a stored procedure provides a performance benefit because the execution plan is cached and stored for reuse
    • [SQL Server 2000] all execution plans are cached, regardless of whether they’re the result of inline T-SQL or a stored procedure call
    • {corollary} all T-SQL must be encapsulated into stored procedures
  • {myth} stored procedures are hard to manage


References:
[1] Paul Nielsen et al (2009) SQL Server 2008 Bible
[2] Tobias Thernström et al (2009) MCTS Exam 70-433 Microsoft SQL Server 2008 – Database Development. Self-Paced Training Kit
[7] Michael Lee & Gentry Bieker (2008) Mastering SQL Server® 2008
[8] Joseph Sack (2008) SQL Server 2008 Transact-SQL Recipes
[15] Adam Jorgensen et al (2012)  Microsoft® SQL Server® 2012 Bible
[26] Patrick LeBlanc (2013) Microsoft SQL Server 2012: Step by Step, Microsoft Press
[33] Ken Henderson (2001) The Guru's Guide to SQL Server™ Stored Procedures, XML, and HTML
[42] Dušan Petkovic (2008) Microsoft® SQL Server™ 2008: A Beginner’s Guide
[51] Michael Lee & Gentry Bieker (2009) Mastering SQL Server® 2008
[64] Robert D Schneider and Darril Gibson (2008) Microsoft® SQL Server® 2008 All-In-One Desk Reference for Dummies
[100] Itzik Ben-Gan et al (2012) Exam 70-461: Querying Microsoft SQL Server 2012

Acronyms:
DDL - Data Definition Language
DLL - Dynamic Link Library

💠🛠️🗒️SQL Server: Statistics [Notes]

Disclaimer: This is work in progress based on notes gathered over the years, intended to consolidate information from various sources. The content has yet to be reviewed against the current documentation.

Last updated: 19-Mar-2025

[SQL Server 2005] Statistics

  • {def} objects that contain statistical information about the distribution of values in one or more columns of a table or indexed view [2]
    • used by query optimizer to estimate the cardinality (aka number of rows)  in the query result
    • created on any data type that supports comparison operations [4]
    • metadata maintained about index keys and, optionally, nonindexed column values
      • implemented via statistics objects 
      • can be created over most types 
        • generally data types that support comparisons (such as >, =, and so on) support the creation of statistics [20]
      • the DBA is responsible for keeping the statistics objects up to date in the system [20]
      • {restriction} the combined width of all columns constituting a single statistics set must not be greater than 900 bytes [21]
    • statistics collected
      • time of the last statistics collection (inside STATBLOB) [21]
      • number of rows in the table or index (rows column in SYSINDEXES) [21]
      • number of pages occupied by the table or index (dpages column in SYSINDEXES) [21]
      • number of rows used to produce the histogram and density information (inside STATBLOB, described below) [21]
      • average key length (inside STATBLOB) [21]
      • histogram (see the example below)
        • measures the frequency of occurrence for each distinct value in a data set [31]
        • computed by the query optimizer on the column values in the first key column of the statistics object, selecting the column values by statistically sampling the rows or by performing a full scan of all rows in the table or view [31]
          • when created from a sampled set of rows, the stored totals for number of rows and number of distinct values are estimates and do not need to be whole integers [31]
          • it sorts the column values, computes the number of values that match each distinct column value and then aggregates the column values into a maximum of 200 contiguous histogram steps [31]
            • each step includes a range of column values followed by an upper bound column value [31]
            • the range includes all possible column values between boundary values, excluding the boundary values themselves [21]
            • the lowest of the sorted column values is the upper boundary value for the first histogram step [31]
        • defines the histogram steps according to their statistical significance [31] 
          • uses a maximum difference algorithm to minimize the number of steps in the histogram while maximizing the difference between the boundary values
            • the maximum number of steps is 200 [31]
            • the number of histogram steps can be fewer than the number of distinct values, even for columns with fewer than 200 boundary points [31]
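
The collected statistics, including the histogram steps, can be inspected via DBCC SHOW_STATISTICS (table and statistics names are placeholders):

  -- header (rows, rows sampled, average key length), density vector and histogram
  DBCC SHOW_STATISTICS ('dbo.SalesOrder', 'IX_SalesOrder_CustomerId');

  -- only the up-to-200-step histogram
  DBCC SHOW_STATISTICS ('dbo.SalesOrder', 'IX_SalesOrder_CustomerId') WITH HISTOGRAM;
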
    • stored only for the first column of a composite index
      • the most selective columns in a multicolumn index should be selected first [2]
        • the histogram will be more useful to the optimizer [2]
      • single column histogram [21]
      • splitting up composite indexes into multiple single-column indexes is sometimes advisable [2]
    • used to estimate the selectivity of nonequality selection predicates, joins, and other operators  [3]
    • contains a sampling of up to 200 values for the index's first key column
      • the values in a given column are sorted in ordered sequence
        • divided into up to 199 intervals 
          • so that the most statistically significant information is captured
          • in general, of nonequal size
  • {type} table-level statistics
    • statistics maintained on each table
    • include: 
      • number of rows in the table [8]
      • number of pages used by the table [8]
      • number of modifications made to the keys of the table since the last update to the statistics [8]
  • {type} index statistics 
    • created with the indexes
    • updated with fullscan when the index is rebuilt [13]
      • {exception} [SQL Server 2012] for partitioned indexes when the number of partitions >1000 it uses default sampling [13]
  • {type} column statistics (aka non-index statistics)
    • statistics on non-indexed columns
    • determine the likelihood that a given value might occur in a column [2]
      • gives the optimizer valuable information in determining how best to service a query [2]
        • allows the optimizer to estimate the number of rows that will qualify from a given table involved in a join [2]
          • allows to more accurately select join order [2]
      • if automatic creation or updating of statistics is disabled, the Query Optimizer returns a warning in the showplan output when compiling a query where it thinks it needs this information [20]
    • used by optimizer to provide histogram-type information for the other columns in a multicolumn index [2]
      • the more information the optimizer has about data, the better [2]
      • queries asking for data that is outside the bounds of the histogram will estimate 1 row and typically end up with a bad, serial plan [15]
    • automatically created
      • when a nonindexed column is queried while AUTO_CREATE_STATISTICS is enabled for the database
    • aren't updated when an index is reorganized or rebuilt [10]
      • {exception} [SQL Server 2005] indexes rebuilt with DBCC DBREINDEX [10]
        • updates index statistics as well column statistics [10]
        • this feature will be removed in a future version of Microsoft SQL Server
  • {type} [SQL Server 2008] filtered statistics
    • useful when dealing with partitioned data or data that is wildly skewed due to wide ranging data or lots of nulls
    • scope defined with the help of a statistic filter (see the example below)
      • a condition that is evaluated to determine whether a row must be part of the filtered statistics
      • the predicate appears in the WHERE clause of the CREATE STATISTICS or CREATE INDEX statements (in the case when statistics are automatically created as a side effect of creating an index)
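
E.g. (table, column and filter are placeholders):

  -- filtered statistics restricted to the non-null rows
  CREATE STATISTICS ST_SalesOrder_Region_NotNull
  ON dbo.SalesOrder (Region)
  WHERE Region IS NOT NULL;
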
  • {type} custom statistics
    • {limitation}[SQL Server] not supported 
  • {type} temporary statistics
    • when statistics on a read-only database or read-only snapshot are missing or stale, the Database Engine creates and maintains temporary statistics in tempdb  [22]
      • a SQL Server restart causes all temporary statistics to disappear [22]
      • the statistics name is appended with the suffix _readonly_database_statistic 
        • to differentiate the temporary statistics from the permanent statistics [22]
        • reserved for statistics generated by SQL Server [2] 
        • scripts for the temporary statistics can be created and reproduced on a read-write database [22]
          • when scripted, SSMS changes the suffix of the statistics name from _readonly_database_statistic to _readonly_database_statistic_scripted [22]
    • {restriction} can be created and updated only by SQL Server [22]
      • can be deleted and monitored [22]
  • {operation} create statistics
    • auto-create (aka auto-create statistics)
      • on by default 
        • samples the rows by default 
          • {exception} when statistics are created as a by-product of index creation, then a full scan is used [3]
      • {exception} statistics may not be created
        • for tables where the cost of the plan execution would be lower than the statistics creation itself [21]
        • when the server is too busy [21]
          • e.g. too many outstanding compilations in progress [21]
    • manually 
      • via CREATE STATISTICS (see the example below)
        • only generates the statistics for a given column or combination of columns [21]
        • {recommendation} keep the AUTO_CREATE_STATISTICS option on so that the query optimizer continues to routinely create single-column statistics for query predicate columns [22]
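
E.g. (table and column names are placeholders):

  -- single-column statistics with the default sample
  CREATE STATISTICS ST_SalesOrder_Status
  ON dbo.SalesOrder (Status);

  -- multicolumn statistics built from a full scan of the table
  CREATE STATISTICS ST_SalesOrder_Cust_Date
  ON dbo.SalesOrder (CustomerId, OrderDate) WITH FULLSCAN;
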
  • {operation} update statistics
    • covered by the same SQL Server Profiler event as statistics creation
    • ensures that queries compile with up-to-date statistics [22]
    • {recommendation} statistics should not be updated too frequently [22]
      • because there is a performance tradeoff between improving query plans and the time it takes to recompile queries [22]
        • tradeoffs depend from application to application [22]
    • triggered by the executions of either commands:
      • CREATE INDEX ... WITH DROP EXISTING
        • scans the whole data set
          • the index statistics are initially created without sampling
        • allows to set the sample size in the WITH clause either by specifying
          • FULLSCAN 
          • percentage of data to scan
          • interpreted as an approximation 
      • sp_createstats stored procedure
      • sp_updatestats stored procedure
      • DBCC DBREINDEX
        • rebuilds one or more indexes for a table in the specified database [21]
        • DBCC INDEXDEFRAG or ALTER INDEX REORGANIZE operations don’t update the statistics [22]
    • undocumented options (see the sketch below)
      • meant for testing and debugging purposes
        • should never be used on production systems [29]
      • STATS_STREAM = stats_stream 
      • ROWCOUNT = numeric_constant 
        • together with PAGECOUNT, alters the internal metadata of the specified table or index by overriding the counters containing the row and page counts of the object [29]
          • read by the Query Optimizer when processing queries that access the table and/or index in question [29]
            • cheats the Optimizer into thinking that a table or index is extremely large [29]
              • the content of the actual tables and indexes will remain intact [29]
      • PAGECOUNT = numeric_constant
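
A sketch for testing purposes only, per the warning above (never on production systems):

  -- cheat the Optimizer into treating the table as extremely large;
  -- the actual data remains intact
  UPDATE STATISTICS dbo.SalesOrder
  WITH ROWCOUNT = 10000000, PAGECOUNT = 100000;
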
    • auto-update (aka auto statistics update)
      • on by default 
        • samples the rows by default 
          • always performed by sampling the index or table [21]
      • triggered by either
        • query optimization 
        • execution of a compiled plan
      • involves only a subset of the columns referred to in the query 
      • occurs
        • before query compilation if AUTO_UPDATE_STATISTICS_ASYNC is OFF
        • asynchronously if AUTO_UPDATE_STATISTICS_ASYNC is ON
          • the query that triggered the update proceeds using the old statistics
            • provides more predictable query response time for some workloads [3]
              • particularly the workload with short running queries and very large tables [3]
      • tracks changes to the columns covered by the statistics
        • {limitation} it doesn’t track changes to the columns in the filter predicate of filtered statistics [3]
          • {recommendation} if there are many changes to the columns used in predicates of filtered statistics, consider using manual updates to keep up with the changes [3]
      • enable/disable statistics update 
        • database level
          • ALTER DATABASE dbname SET AUTO_UPDATE_STATISTICS OFF
          • {limitation} it’s not possible to override the database setting of OFF for auto update statistics by setting it ON at the statistics object level [3]
        • table level
          • NORECOMPUTE option of the UPDATE STATISTICS command 
          • CREATE STATISTICS command
          • sp_autostats
        • index
          • sp_autostats
        • statistics object
          • sp_autostats
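        • {example} minimal sketches for the table and statistics object levels (object names are hypothetical):
            -- disable auto-update for all statistics on a table
            EXEC sp_autostats 'dbo.Orders', 'OFF';
            -- disable auto-update for a single statistics object
            EXEC sp_autostats 'dbo.Orders', 'OFF', 'st_Orders_CustomerDate';
            -- the same via UPDATE STATISTICS
            UPDATE STATISTICS dbo.Orders st_Orders_CustomerDate WITH NORECOMPUTE;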
      • [SQL Server 2005] asynchronous statistics update
        • allows the statistics update operation to be performed on a background thread in a different transaction context
          • avoids the repeating rollback issue [20]
          • the original query continues and uses out-of-date statistical information to compile the query and return it to be executed [20]
          • when the statistics are updated, plans based on those statistics objects are invalidated and are recompiled on their next use [20]
        • {command}
          • ALTER DATABASE... SET AUTO_UPDATE_STATISTICS_ASYNC {ON | OFF}
    • manually
      • {recommendation} don’t update statistics after index defragmentation
        • an index rebuild already updates the index statistics with a full scan; if necessary, update only the column statistics
      • [system tables] it might be necessary to update statistics on the system tables as well when many objects were created
    • triggers a recompilation of the queries [22]
      • {exception} when a plan is trivial [1]
        • a better or different plan would not be generated [1]
      • {exception} when the plan is non-trivial but there were no row modifications since the last statistics update [1]
        • no inserts, deletes or updates since the last statistics update [1]
        • the procedure cache can be freed manually if a new plan needs to be generated [1] (see the sketch after this list)
      • schema change 
        • dropping an index defined on a table or an indexed view 
          • only if the index is used by the query plan in question
        • [SQL Server 2000] manually updating or dropping a statistic (not creating!) on a table will cause a recompilation of any query plans that use that table
          • the recompilation happens the next time the query plan in question begins execution
        • [SQL Server 2005] dropping a statistic (not creating or updating!) defined on a table will cause a correctness-related recompilation of any query plans that use that table
          • the recompilations happen the next time the query plan in question begins execution
        • updating a statistic (both manually and via auto-update) will cause an optimality-related (data-related) recompilation of any query plans that use this statistic
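      • {example} a minimal sketch of freeing the procedure cache, as referenced above (clears all cached plans; use with care on production systems):
          DBCC FREEPROCCACHE;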
      • update scenarios 
        • {scenario} query execution times are slow [22]
          • troubleshooting: ensure that queries have up-to-date statistics before performing additional analysis [22] 
        • {scenario} insert operations occur on ascending or descending key columns [22]
          • statistics on ascending or descending key columns (e.g. IDENTITY or real-time timestamp) might require more frequent statistics updates than the query optimizer performs [22]
          • if statistics are not up-to-date and queries select from the most recently added rows, the current statistics will not have cardinality estimates for these new values [22]
            • this leads to inaccurate cardinality estimates and slow query performance [22]
        • {scenario} after maintenance operations [22]
          • {recommendation} consider updating statistics after performing maintenance procedures that change the distribution of data (e.g. truncating a table, bulk inserts) [22]
            • this can avoid future delays in query processing while queries wait for automatic statistics updates [22]
            • rebuilding, defragmenting, or reorganizing an index do not change the distribution of data [22]
              • no need to update statistics after performing ALTER INDEX REBUILD, DBCC DBREINDEX, DBCC INDEXDEFRAG, or ALTER INDEX REORGANIZE operations [22]
  • {operation} dropping statistics
    • via the DROP STATISTICS command (see the sketch after this list)
    • {limitation} it’s not possible to drop statistics that are a byproduct of an index [21]
      • such statistics are removed only when the index is dropped [21]
    • aging 
      • [SQL Server 2000] ages the automatically created statistics (only those that are not a byproduct of the index creation)
      • after several automatic updates the column statistics are dropped rather than updated [21]
        • if they are needed in the future, they may be created again [21]
        • there is no substantial cost difference between updating and creating statistics [21]
      • does not affect user-created statistics  [21]
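    • {example} a minimal sketch, as referenced above (object names are hypothetical):
        DROP STATISTICS dbo.Orders.st_Orders_CustomerDate;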
  • {operation} import (aka statistics import)
    • performed by running the generated scripts (see statistics export)
  • {operation} export (aka statistics export)
    • performed via “Generate Scripts” task
      • WITH STATS_STREAM
        • value can be generated by using DBCC SHOW_STATISTICS WITH STATS_STREAM
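    • {example} a hedged sketch for obtaining the stream (object names are hypothetical; the option is undocumented):
        DBCC SHOW_STATISTICS ('dbo.Orders', 'st_Orders_CustomerDate') WITH STATS_STREAM;
        -- the returned binary value is what the generated scripts plug into
        -- CREATE STATISTICS ... WITH STATS_STREAM = <value>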
  • {operation} save/restore statistics
    • not supported
  • {operation} monitoring
    • via DBCC SHOW_STATISTICS
    • via SQL Server Profiler
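    • {example} minimal sketches (object names are hypothetical; sys.dm_db_stats_properties assumes SQL Server 2008 R2 SP2 or later):
        -- header, density vector and histogram of a statistics object
        DBCC SHOW_STATISTICS ('dbo.Orders', 'st_Orders_CustomerDate');
        -- last update date and modification counter per statistics object
        SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE s.object_id = OBJECT_ID('dbo.Orders');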
  • {option} AUTO_CREATE_STATISTICS
    • when enabled the query optimizer creates statistics on individual columns in the query predicate, as necessary, to improve cardinality estimates for the query plan [22]
      • these single-column statistics are created on columns that do not already have a histogram in an existing statistics object [22]
      • it applies strictly to single-column statistics for the full table [22]
      • statistics name starts with _WA
    • does not determine whether statistics get created for indexes [22]
    • does not generate filtered statistics [22]
      • call: SELECT DATABASEPROPERTY('<database_name>', 'IsAutoCreateStatistics')
      • call: SELECT is_auto_create_stats_on FROM sys.databases WHERE name = '<database_name>'
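    • {example} a minimal sketch for toggling the option (database name is hypothetical):
        ALTER DATABASE SalesDB SET AUTO_CREATE_STATISTICS ON;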
  • {option}AUTO_UPDATE_STATISTICS option
    • when enabled the query optimizer determines when statistics might be out-of-date and then updates them when they are used by a query [22]
      • statistics become out-of-date after insert, update, delete, or merge operations change the data distribution in the table or indexed view [22]
      • determined by counting the number of data modifications since the last statistics update and comparing the number of modifications to a threshold [22]
        • the threshold is based on the number of rows in the table or indexed view [22]
      • the query optimizer checks for out-of-date statistics 
        • before compiling a query 
          • the query optimizer uses the columns, tables, and indexed views in the query predicate to determine which statistics might be out-of-date [22]
        • before executing a cached query plan
          • the Database Engine verifies that the query plan references up-to-date statistics [22]
    • applies to 
      • statistics objects created for indexes, single-columns in query predicates [22]
      • statistics created with the CREATE STATISTICS statement [22]
      • filtered statistics [22]
    • call: SELECT DATABASEPROPERTY('<database_name>', 'IsAutoUpdateStatistics')
    • call: SELECT is_auto_update_stats_on FROM sys.databases WHERE name = '<database_name>'
  • {option} AUTO_UPDATE_STATISTICS_ASYNC
    • determines whether the query optimizer uses synchronous or asynchronous statistics updates 
    • {default} off
      • the query optimizer updates statistics synchronously
        • queries always compile and execute with up-to-date statistics
        • when statistics are out-of-date, the query optimizer waits for updated statistics before compiling and executing the query [22]
        • {recommendation} use when performing operations that change the distribution of data [22]
          • e.g. truncating a table or performing a bulk update of a large percentage of the rows [22]
          • if statistics are not updated after completing the operation, using synchronous updates ensures statistics are up-to-date before executing queries on the changed data [22]
    • applies to statistics objects created for  [22]
      • indexes
      • single columns in query predicates
      • statistics created with the CREATE STATISTICS statement
    • asynchronous statistics updates
      • queries compile with existing statistics even if the existing statistics are out-of-date
        • the query optimizer could choose a suboptimal query plan if statistics are out-of-date when the query compiles [22]
        • queries that compile after the asynchronous updates have completed will benefit from using the updated statistics [22]
      • {warning} it will do nothing if the “Auto Update Statistics” database option isn’t enabled [30]
      • {recommendation} scenarios
        •  to achieve more predictable query response times
        • an application frequently executes the same query, similar queries, or similar cached query plans
          • the query response times might be more predictable with asynchronous statistics updates than with synchronous statistics updates because the query optimizer can execute incoming queries without waiting for up-to-date statistics [22]
            • this avoids delaying some queries and not others [22]
        • an application experienced client request timeouts caused by one or more queries waiting for updated statistics [22]
          • in some cases, waiting for synchronous statistics updates could cause applications with aggressive timeouts to fail [22]
      • call: SELECT is_auto_update_stats_async_on FROM sys.databases WHERE name = '<database_name>'
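    • {example} per the warning above, AUTO_UPDATE_STATISTICS must also be on for asynchronous updates to take effect; a minimal sketch (database name is hypothetical):
        ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;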
  • [SQL Server 2014] Auto Create Incremental Statistics
    • when active (ON), the statistics created are per partition statistics [22]
    • {default} when disabled (OFF), the statistics tree is dropped and SQL Server re-computes the statistics [22]
    • this setting overrides the database level INCREMENTAL property [22]
  • [partitions] when new partitions are added to a large table, statistics should be updated to include the new partitions [22]
    • however the time required to scan the entire table (FULLSCAN or SAMPLE option) might be quite long [22]
    • scanning the entire table isn't necessary because only the statistics on the new partitions might be needed [22]
    • the incremental option creates and stores statistics on a per partition basis, and when updated, only refreshes statistics on those partitions that need new statistics [22]
    • if per partition statistics are not supported the option is ignored and a warning is generated [22]
    • not supported for several statistics types
      • statistics created with indexes that are not partition-aligned with the base table [22]
      • statistics created on AlwaysOn readable secondary databases [22]
      • statistics created on read-only databases [22]
      • statistics created on filtered indexes [22]
      • statistics created on views [22]
      • statistics created on internal tables [22]
      • statistics created with spatial indexes or XML indexes [22]
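    • {example} minimal sketches for the incremental option (object names are hypothetical):
        -- have auto-created statistics maintained per partition
        ALTER DATABASE SalesDB SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);
        -- create per-partition statistics on a partitioned table
        CREATE STATISTICS st_Orders_OrderDate ON dbo.Orders (OrderDate) WITH INCREMENTAL = ON;
        -- refresh only the partition(s) that changed
        UPDATE STATISTICS dbo.Orders (st_Orders_OrderDate) WITH RESAMPLE ON PARTITIONS (12);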

References:
[1] Microsoft Learn (2025) SQL Server: Statistics [link]
[2] Microsoft Learn (2025) SQL Server: Configure auto statistics [link]
[3] Microsoft Learn (2009) SQL Server: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 [link]
[4] Dmitri Korotkevitch (2016) Pro SQL Server Internals 2nd Ed.
[8] Microsoft (2014) Statistical maintenance functionality (autostats) in SQL Server (KB 195565)
[10] CSS SQL Server Engineers (2015) Does rebuild index update statistics? [link]
[16] Thomas Kejser's Database Blog (2011) The Ascending Key Problem in Fact Tables – Part One: Pain!
[20] Kalen Delaney et al (2013) Microsoft SQL Server 2012 Internals
[22] MSDN (2016) Statistics [link]
[29] Tips, Tricks, and Advice from the SQL Server Query Optimization Team (2006) UPDATE STATISTICS undocumented options [link]
[31] Microsoft Learn (2025) SQL Server: sys.dm_db_stats_histogram (Transact-SQL) [link]