
13 February 2025

🧊💠🗒️Data Warehousing: Table Partitioning in SQL Server [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes.

Last updated: 13-Feb-2025

[Data Warehousing] Table Partitioning

  • {def} the spreading of data across multiple tables based on a set of rules to balance large amounts of data across disks or nodes (see the T-SQL sketch at the end of these notes)
    • data is distributed based on a function that defines a range of values for each partition [2]
      • the table is partitioned by applying the partition scheme to the values in a specified column [2]
  • {operation} partition creation
    • [large partitioned table]
      • two auxiliary nonindexed empty tables with the same structure, including constraints and data compression options, should be created [4]
        • first table: create a check constraint that guarantees that all data from the table fits exactly into one empty partition of the fact table [4]
          • the constraint must be created on the partitioning column [4]
          • a columnstore index can be created on the fact table, as long as it is aligned with the table [4]
        • second table: for minimally logged deletions of large portions of data, a partition of the fact table can be switched to the empty table version without the check constraint [4]
          • the table can then be truncated
          • after the truncation, the table is prepared to accept the next partition from the fact table for the next minimally logged deletion [4]
      • for minimally logged inserts, new data should be bulk inserted into the auxiliary table that has the check constraint [4]
        • the INSERT operation can be minimally logged because the table is empty [4]
        • create a columnstore index on this auxiliary table, using the same structure as the columnstore index on the fact table [4]
        • switch data from this auxiliary table to a partition of the fact table [4]
        • drop the columnstore index on the auxiliary table, and change the check constraint to guarantee that all of the data for the next load can be switched to the next empty partition of the fact table [4]
        • the auxiliary table is thus prepared for new bulk loads again [4]
  • {operation} [Query Optimizer] partition elimination 
    • process in which SQL Server accesses only those partitions needed to satisfy query filters [4]
  • {operation} partition switching 
    • {definition} process that switches a block of data from one table or partition to another table or partition [4]
    • types of switches
      • reassign all data from a nonpartitioned table to an empty existing partition of a partitioned table [4]
      • switch a partition of one partitioned table to a partition of another partitioned table [4]
      • reassign all data from a partition of a partitioned table to an existing empty nonpartitioned table [4]
  • {benefit} improves query performance [1]
    • by partitioning a table across filegroups [1]
      • specific ranges of data can be placed on different disk spindles [1]
        • can improve I/O performance [1]
          • ⇐ if the disk storage is already configured as a RAID 10 or RAID 5 array [1]
            • ⇒ this usually has little benefit [1]
      • using a mix of fast solid state storage for recent, frequently accessed data, and mechanical disks for older, less queried rows [1]
        • use partitioning to balance disk performance against storage costs [1]
      • biggest performance gain from partitioning in a data warehouse is realized when queries return a range of rows that are filtered on the partitioning key [1]
        • the query optimizer can eliminate partitions that are not within the filter range [1]
          • dramatically reduce the number of rows that need to be read [1]
    • reduces contention [3]
      • can reduce the number of rows included in a table scan [3]
  • {benefit} more granular manageability [1]
    • some maintenance operations can be performed at partition level instead of on the whole table [1]
      • e.g. indexes can be created and rebuilt on a per-partition basis [1]
      • e.g. compression can be applied to individual partitions [1]
      • e.g. by mapping partitions to filegroups, partitions can be backed up and restored independently [1]
        • enables older data to be backed up once and the backed-up partitions to be configured as read-only [1]
        • future backups can be limited to the partitions that contain new or updated data [1]
  • {benefit} improved data load performance
    • enables loading many rows very quickly by switching a staging table with a partition
      • can dramatically reduce the time taken by ETL data loads [1]
        • with the right planning, it can be achieved with minimal requirements to drop or rebuild indexes [1]
  • {best practice} partition large fact tables
    • tables of around 50 GB or more
    • ⇐ in general, fact tables benefit from partitioning more than dimension tables [1]
  • {best practice} partition on an incrementing date key [1]
    • ensures that the most recent data are in the last partition and the earliest data are in the first partition [1]
  • {best practice} design the partition scheme for ETL and manageability [1]
    • the query performance gains realized by partitioning are small compared to the manageability and data load performance benefits [1]
      • ideally partitions should reflect the ETL load frequency
        • because this simplifies the load process [1]
        • merge partitions periodically to reduce the overall number of partitions (for example, at the start of each year) [1]
          • e.g. the monthly partitions for the previous year could be merged into a single partition for the whole year [1]
  • {best practice} maintain an empty partition at the start and end of the table [1]
    • simplifies the loading of new rows [1]
    • when new records need to be added, split the empty partition 
      • ⇐ to create two empty partitions
    • then switch the staged data with the first empty partition [1]
      • ⇐ loads the data into the table and leaves the second empty partition you created at the end of the table, ready for the next load [1]
    • a similar technique can be used to archive or delete obsolete data at the beginning of the table [1]
  • {best practice} choose the proper granularity
    • it should be aligned to the business requirements [2]
  • {best practice} create at least one filegroup in addition to the primary one
    • set it as the default filegroup
      • data tables are thus separated from system tables [2]
    • create dedicated filegroups for extremely large fact tables [2]
      • place the fact tables on their own logical disks [2]
    • use a file and a filegroup separate from the fact and dimension tables [2]
      • {exception} staging tables that will be switched with partitions to perform fast loads [2]
        • staging tables must be created on the same filegroup as the partition with which they will be switched [2]
  • {def} partition scheme 
    • object that maps the partitions of a partitioned table to filegroups
    • performs physical mapping
  • {def} partition function 
    • object that maps rows to partitions by using values from specific columns (aka partitioning columns)
    • performs logical mapping
  • {def} aligned index 
    • index built on the same partition scheme as its base table [4]
      • if all indexes are aligned with their base table, switching a partition is a metadata operation only [4]
        • ⇒ it’s very fast [4]
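
A minimal T-SQL sketch of the notions above, assuming hypothetical object names (pfSales, psSales, dbo.FactSales, dbo.FactSales_Stage) and an existing filegroup [FactData]; it shows a partition function and scheme, a fact table partitioned on a date key, a staging table switched into an empty partition, partition elimination, and the splitting of the empty partition at the end of the table:

-- {def} partition function: logical mapping of column values to partitions
CREATE PARTITION FUNCTION pfSales (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2025-01-01', '2026-01-01');
-- 4 partitions: < 2024, 2024, 2025, >= 2026 (the last one kept empty)

-- {def} partition scheme: physical mapping of partitions to filegroups
CREATE PARTITION SCHEME psSales
AS PARTITION pfSales ALL TO ([FactData]);

-- fact table partitioned on the date key
CREATE TABLE dbo.FactSales (
    SaleDate date  NOT NULL
  , Amount   money NOT NULL
) ON psSales (SaleDate);

-- staging table: same structure, same filegroup as the target partition,
-- with a check constraint that fits exactly one empty partition
CREATE TABLE dbo.FactSales_Stage (
    SaleDate date  NOT NULL
  , Amount   money NOT NULL
  , CONSTRAINT CK_Stage_2025
        CHECK (SaleDate >= '2025-01-01' AND SaleDate < '2026-01-01')
) ON [FactData];

-- bulk load the staging table (minimally logged because it is empty), then
-- switch it into the matching partition; with aligned indexes this is a
-- metadata-only operation
ALTER TABLE dbo.FactSales_Stage
    SWITCH TO dbo.FactSales PARTITION $PARTITION.pfSales('2025-01-01');

-- {operation} partition elimination: a filter on the partitioning column
-- lets the optimizer read only the matching partition(s)
SELECT SUM(Amount)
FROM dbo.FactSales
WHERE SaleDate >= '2025-01-01' AND SaleDate < '2026-01-01';

-- sliding window: split the empty partition at the end before the next load
ALTER PARTITION SCHEME psSales NEXT USED [FactData];
ALTER PARTITION FUNCTION pfSales() SPLIT RANGE ('2027-01-01');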

References:
[1] 20467A - Designing Business Intelligence Solutions with Microsoft SQL Server 2012
[2] 20463C - Implementing a Data Warehouse with Microsoft SQL Server
[3] 10777A - Implementing a Data Warehouse with Microsoft SQL Server 2012
[4] Dejan Sarka et al (2012) Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 (Training Kit)
[5] Microsoft Learn (2009) How to Implement an Automatic Sliding Window in a Partitioned Table on SQL Server 2005 [link]

28 February 2017

🧊Data Warehousing: Data Load Optimization (Part I: A Success Story)


Introduction

This topic has been waiting in the queue for almost two years, since I finished optimizing an already existing relational data warehouse within a SQL Server 2012 Enterprise Edition environment. Through various simple techniques I managed to reduce the running time of the load process by more than 65%, from 9 to 3 hours. It’s a considerable performance gain, considering that I didn’t have to refactor any business logic implemented in the queries.

The ETL (Extract, Transform, Load) solution used SSIS (SQL Server Integration Services) packages to load data sequentially from several sources into staging tables, and from staging further into the base tables. Each package was responsible for deleting the data from the staging tables via TRUNCATE, extracting the data 1:1 from the source into the staging tables, then loading the data 1:1 from the staging tables into the base tables. It’s one of the simplest and relatively effective ETL designs, one I have also used with small alterations for other data warehouse solutions. For months the data load worked smoothly, until data growth and eventually other problems increased the loading time from 5 to 9 hours.

Using TABLOCK Hint

Using SSIS to bulk load data into SQL Server provides a good balance of performance and flexibility. Within a Data Flow, when the “Table Lock” property on the destination is checked, the inserted records can be minimally logged, which can speed up the load by a factor of two. The TABLOCK hint can also be used for insert operations performed outside of SSIS packages. In this case the movement of data from staging into the base tables was performed in plain T-SQL, outside of the SSIS packages, and further data processing benefited from this change as well. This optimization step alone provided a 30-40% performance gain.
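
A rough sketch of the staging-to-base move in plain T-SQL (table and column names are hypothetical); with the TABLOCK hint, an empty or heap target and the database in the simple or bulk-logged recovery model, the insert can be minimally logged:

-- minimally logged insert into the base table (names hypothetical)
INSERT INTO dbo.BaseSales WITH (TABLOCK)
       (SaleDate, CustomerId, Amount)
SELECT SaleDate, CustomerId, Amount
FROM dbo.StagingSales;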

Drop/Recreating the Indexes on Big Tables

As the base tables each had several indexes, it proved beneficial to drop the indexes on the big tables (e.g. those with more than 1,000,000 records) before loading the data into the base tables, and to recreate the indexes afterwards. This was done within SSIS, and provided an additional 20-30% performance gain over the previous step.
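
The pattern looked roughly as follows (index and table names hypothetical); for nonclustered indexes, ALTER INDEX ... DISABLE followed by ALTER INDEX ... REBUILD achieves a similar effect:

-- drop the nonclustered indexes on the big table before the load ...
DROP INDEX IX_BaseSales_CustomerId ON dbo.BaseSales;

-- ... load the data (e.g. INSERT ... WITH (TABLOCK)) ...

-- ... then recreate the indexes afterwards
CREATE NONCLUSTERED INDEX IX_BaseSales_CustomerId
    ON dbo.BaseSales (CustomerId);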

Consolidating the Indexes

Adding missing indexes and removing or consolidating overlapping indexes are typical index maintenance tasks, apparently occasionally ignored. Compared with the previous methods this doesn’t always bring much of a performance gain, though dropping and consolidating some indexes proved to be beneficial, as less data had to be maintained. The data processing logic benefited from the creation of new indexes as well.
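
Overlapping indexes can be spotted with a catalog query like the following sketch, which lists each index together with its key columns; two indexes on the same table whose leading key columns coincide are candidates for consolidation:

-- list indexes with their key columns to spot overlapping ones
SELECT t.name AS table_name
     , i.name AS index_name
     , STUFF((SELECT ', ' + c.name
              FROM sys.index_columns ic
                   JOIN sys.columns c
                     ON c.object_id = ic.object_id
                    AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id
                AND ic.index_id = i.index_id
                AND ic.is_included_column = 0
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '') AS key_columns
FROM sys.indexes i
     JOIN sys.tables t
       ON t.object_id = i.object_id
WHERE i.type_desc IN ('CLUSTERED', 'NONCLUSTERED')
ORDER BY t.name, key_columns;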

Running Packages in Parallel

As the packages were run sequentially (one package at a time), the data load hardly took advantage of the processing power available on the server. Even if queries could use parallelism, the benefit was minimal. Enabling the packages to run in parallel added a further performance gain, however it reduced the availability of processing resources for other tasks. When the data load is performed overnight this causes minimal overhead, however it should be avoided when the data are loaded during business hours.

Using Clustered Indexes

In my analysis I found that many tables, especially the ones storing prepared data, were lacking a clustered index, even if further indexes were built on them. I remember that years back there was a (false) myth that fact and/or dimension tables don’t need clustered indexes in SQL Server. Of course clustered indexes have downsides (e.g. fragmentation, excessive key lookups), though their benefits exceed the downsides by far. Besides the missing clustered indexes, there were cases in which a table would have benefited from a narrow clustered index instead of a wide multicolumn one. Such cases were addressed as well, where appropriate.
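
Where appropriate, a narrow clustered index was created instead, e.g. on the date key (names hypothetical):

-- narrow clustered index on the date key instead of a wide multicolumn key
CREATE CLUSTERED INDEX CIX_PreparedSales_SaleDate
    ON dbo.PreparedSales (SaleDate);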

Removing the Staging Tables

Given that the source and target systems are in the same virtual environment, and that the data are loaded 1:1 between the various layers, without further transformations and conversions, one could load the data directly into the base tables. After some tests I came to the conclusion that the load from the source tables into a staging table and the load from the staging table into the base table (with the TABLOCK hint) took almost the same amount of time. This means that the base tables would be unavailable for roughly the same amount of time if the data were loaded from the sources directly into the base tables. Therefore one could in theory remove the staging tables from the architecture. Frankly, one should think twice before making such a change, as there can be further implications over time. Even if today the data are imported 1:1, in the future this could change.

Reducing the Data Volume

Reducing the data volume was identified as a further possible technique to reduce the amount of time needed for the data load. A data warehouse is built based on a set of requirements and presumptions that change over time. It can happen, for example, that even if the reports need only 1-2 years’ worth of data, the data load considers a much bigger timeframe. Some systems can hold up to 5-10 years’ worth of data. Loading all data without a specific requirement leads to a waste of resources and longer load times. Limiting the transactional data to a given timeframe can make a considerable difference, as sketched below. Additionally, there are historical data that have the potential to be archived.
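
For example, the extraction queries can limit the transactional data to the timeframe the reports actually need (table name and window are hypothetical):

-- load only the last two years of transactional data
SELECT SaleDate, CustomerId, Amount
FROM dbo.SourceSales
WHERE SaleDate >= DATEADD(YEAR, -2, CAST(GETDATE() AS date));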

There are also tables for which a weekly or monthly refresh would suffice. Some tables or even data sources can become obsolete, yet they continue to be loaded into the data warehouse. Such cases occur seldom, though they do occur. Also, some unused or redundant columns could be removed from the packages.

Further Thoughts

There are further techniques to optimize the data load within a data warehouse, like partitioning large tables, using columnstore indexes or optimizing the storage, however my target was to provide a sufficient performance gain with a minimum of effort and design changes. Therefore I stopped when I considered that the amount of effort was considerably higher than the performance gain.

Further Reading:
[1] TechNet (2009) The Data Loading Performance Guide, by Thomas Kejser, Peter Carlin & Stuart Ozer (link)
[2] MSDN (2010) Best Practices for Data Warehousing with SQL Server 2008 R2, by Mark Whitehorn, Keith Burns & Eric N Hanson (link)
[3] MSDN (2012) Whitepaper: Fast Track Data Warehouse Reference Guide for SQL Server 2012, by Eric Kraemer, Mike Bassett, Eric Lemoine & Dave Withers (link)
[4] MSDN (2008) Best Practices for Data Warehousing with SQL Server 2008, by Mark Whitehorn & Keith Burns (link)
[5] TechNet (2005) Strategies for Partitioning Relational Data Warehouses in Microsoft SQL Server, by Gandhi Swaminathan (link)
[6] SQL Server Customer Advisory Team (2013) Top 10 Best Practices for Building a Large Scale Relational Data Warehouse (link)

08 February 2009

🛢DBMS: Partitioning (Definitions)

"To divide a table into logical subsets based on characteristics of the data. Partitioning is used to improve application performance or reduce the potential for conflicts in multisite update replication." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Physically separating data into areas that are more easily maintained or accessed. Data partitioning increases performance and aids in maintenance processes." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"The process of splitting a database object (usually a tablespace, table, or index) into two or more physical locations, or partitions, that is, a splitting of a logical group of pages (for example, the pages of a table) into chains or files which are physically removed from each other, perhaps on separate disks. Informix calls this fragmentation." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"The process of replacing a table with multiple smaller tables. Each smaller table has the same format as the original table, but with a subset of the data. Each partitioned table has rows allocated to it based on some characteristic of the data, such as specific key ranges. The rules that define into which table the rows go must be unambiguous. For example, a table is partitioned into two tables. All rows with primary key values lower than a specified value are allocated to one table, and all keys equal to or greater than the value are allocated to the other. Partitioning can improve application processing speeds and reduce the potential for conflicts in multisite update replication. You can improve the usability of partitioned tables by creating a view. The view, created by a union of select operations on all the partitioned tables, presents the data as if it all resided in a single table." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"Physical splitting of tables into separate sections (partitions), including parallel processing on multiple partitions and individual operations on individual partitions. One particularly efficient aspect is the capability when querying a table to read fewer than all the partitions making up a table, perhaps even a single partition. This is also known as partition pruning." (Gavin Powell, "Beginning Database Design", 2006)

[data partitioning:] "Process of moving data from a single server to one or more different data repositories. This can be vertical, in which data from a single table is split into multiple tables, or horizontal, in which the number of rows in a table are restricted and partitioned by one or more columns." (Sara Morgan & Tobias Thernstrom, "MCITP Self-Paced Training Kit: Designing and Optimizing Data Access by Using Microsoft SQL Server 2005 - Exam 70-442", 2007)

"The process of replacing a table with multiple smaller units. Tables can be partitioned horizontally or vertically." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"The method for dividing a database into manageable parts for the purpose of easier management and better performance." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"The process of splitting a table into subsets of rows or columns." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"The process of replacing a table with multiple smaller tables." (Microsoft, "SQL Server 2012 Glossary,", 2012)

"With respect to distributed databases, partitioning refers to splitting documents, tables, or graphs and distributing them to different servers." (Dan Sullivan, "NoSQL for Mere Mortals®", 2015)

[index-controlled partitioning:] "A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are specified on the CREATE INDEX statement." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Partitioning is the spreading of data across multiple files across a cluster to balance large amounts of data across disks or nodes. Read-only partitions make a read-only table space that prevents updates on all tables in the table space. Other patterns can be applied on this table space to improve performance." (Piethein Strengholt, "Data Management at Scale", 2020)

[composite partitioning:] "A partitioning strategy in which a table is partitioned by one data distribution method and then each partition is further divided into subpartitions using a second data distribution method." (Oracle, "Oracle Database Concepts")

"The ability to decompose very large tables and indexes into smaller and more manageable pieces called partitions." (Oracle, "Oracle Database Concepts")
