In SQL Server, temporary tables are stored in the tempdb database, and the front-end (FE) SQL Server of a serverless SQL pool is no exception, even though no such database appears in the list of available databases. Watching today Brian Bønk's session "Azure Synapse serverless - CETAS vs Views vs Parquet" at the Data Toboggan 2023 winter edition, the speaker pointed out that the tables available in the tempdb database can actually be queried:
-- Azure Serverless SQL pool: tempdb tables
SELECT top 10 *
FROM tempdb.information_schema.tables
TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE |
tempdb | dbo | #dm_continuous_copy_status_..._0000000029C7 | BASE TABLE |
tempdb | dbo | diff_MonDmSbsConnections | BASE TABLE |
tempdb | dbo | diff_MonTceMasterKeys | BASE TABLE |
tempdb | dbo | diff_MonTceColEncryptionKey | BASE TABLE |
tempdb | dbo | diff_MonAutomaticTuningState | BASE TABLE |
tempdb | dbo | diff_MonDmTranActiveTransactions | BASE TABLE |
tempdb | dbo | diff_MonTceEnclaveUsageInfo | BASE TABLE |
tempdb | dbo | diff_MonTceEnclaveColEncryptionKey | BASE TABLE |
tempdb | dbo | diff_MonTceEnclaveMasterKeys | BASE TABLE |
tempdb | dbo | diff_MonTceColumnInfo | BASE TABLE |
Moreover, the content of the system tables available can be queried, even if some of the tables might have no data:
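For example, picking one of the tables from the listing above (a minimal sketch; the table chosen is arbitrary and may well return no rows):

```sql
-- Azure Serverless SQL pool: checking a system table's content
SELECT top 10 *
FROM tempdb.dbo.diff_MonDmSbsConnections
```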
How about the temporary tables created? To check this, let's reuse a temp table created recently for listing the DMVs available in the serverless SQL pool:
-- Azure Serverless SQL pool: checking a table's content
SELECT top 10 *
FROM tempdb.dbo.dmv_view_run_history
-- dropping the temp table
DROP TABLE IF EXISTS dbo.#views;

-- creating the temp table
CREATE TABLE dbo.#views (
  ranking int NOT NULL
, view_name nvarchar(150) NOT NULL
)

-- inserting a few records
INSERT INTO #views
SELECT row_number() OVER(ORDER BY object_id) ranking
, concat(schema_name(schema_id), '.', name) view_name
FROM sys.all_views obj
WHERE obj.Type = 'V'
  AND obj.is_ms_shipped = 1
  AND obj.name LIKE 'dm_exec_requests%'

-- checking the temp table's content
SELECT *
FROM dbo.#views
ranking | view_name |
1 | sys.dm_exec_requests |
2 | sys.dm_exec_requests_history |
Here's the temp table's metadata:
-- checking the temp table's metadata
SELECT *
FROM tempdb.information_schema.tables
WHERE table_name like '#views%'
TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE |
tempdb | dbo | #views_____..._____00000000295B | BASE TABLE |
Of course, you can also query tempdb.sys.all_objects:
-- checking the temp table's metadata
SELECT *
FROM tempdb.sys.all_objects
WHERE name like '#views%'
You can now use the table name returned by either of the two queries to query the table directly (just replace the name accordingly):
-- querying the table via tempdb (name will differ)
SELECT *
FROM tempdb.dbo.[#views______________________________________________________________________________________________________________00000000295B]
ranking | view_name |
1 | sys.dm_exec_requests |
2 | sys.dm_exec_requests_history |
The benefit of knowing this is negligible; however, the topic is interesting for those who want to understand how the pools in Azure Synapse work and what was kept or removed compared with the standard editions of SQL Server.
Thus, another topic of interest for DBAs is how many files are created for tempdb, given that the database is used to store intermediate data for various operations. Finding the optimal number of tempdb files and configuring them was, and still is, one of the main concerns when configuring a SQL Server instance for optimal performance.
The old query developed for standard SQL Server seems to work as well:
-- Azure Serverless SQL pool: tempdb files
SELECT dbf.file_id
, dbf.name file_name
--, dbf.physical_name
, dsp.name file_group
--, dbf.type
, dbf.type_desc file_type
--, dbf.growth growth_kb
, Cast(dbf.growth/128.0 as decimal(18,2)) growth_mb
, dbf.is_percent_growth
--, dbf.max_size max_size_kb
, Cast(NullIf(dbf.max_size, -1)/128.0 as decimal(18,2)) max_size_mb
--, dbf.size file_size_kb
, Cast(dbf.size/128.0 as decimal(18,2)) file_size_mb
, dbf.state_desc
, dbf.is_read_only
FROM tempdb.sys.database_files dbf
     LEFT JOIN tempdb.sys.data_spaces dsp
       ON dbf.data_space_id = dsp.data_space_id
ORDER BY dbf.Name
file_id | file_name | file_group | file_type | growth_mb | is_percent_growth | max_size_mb | file_size_mb | state_desc | is_read_only |
1 | tempdev | PRIMARY | ROWS | 32 | 0 | 102670 | 16 | ONLINE | 0 |
3 | tempdev2 | PRIMARY | ROWS | 32 | 0 | 102670 | 16 | ONLINE | 0 |
4 | tempdev3 | PRIMARY | ROWS | 32 | 0 | 102670 | 16 | ONLINE | 0 |
5 | tempdev4 | PRIMARY | ROWS | 32 | 0 | 102670 | 16 | ONLINE | 0 |
2 | templog | NULL | LOG | 64 | 0 | 16384 | 255 | ONLINE | 0 |
So, four data files of 16 MB each were created, with a fixed growth of 32 MB, and each file can grow up to about 100 GB. It would be interesting to check how the files grow under heavy workloads and what kind of issues this raises. At least in serverless SQL pools, many of the views one used to rely on for troubleshooting aren't accessible. Then again, one may not have to go that far, given that the resources are managed automatically.
Notes:
(1) Microsoft recommends not dropping temporary tables explicitly, but letting SQL Server handle the cleanup automatically, thus taking advantage of the optimistic latching algorithm, which helps prevent contention on tempdb [1].
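As a sketch of that recommendation (the temp table name and query are illustrative, not from the original post): create the temp table in a batch or stored procedure and let it go out of scope, instead of issuing an explicit DROP TABLE at the end:

```sql
-- illustrative sketch: rely on automatic cleanup instead of an explicit drop
CREATE TABLE #results (
  view_id int NOT NULL
);

INSERT INTO #results
SELECT object_id
FROM sys.all_views;

SELECT count(*) view_count
FROM #results;

-- no DROP TABLE #results here: when the session (or the stored procedure
-- that created the table) ends, SQL Server cleans the table up itself and
-- can cache its metadata, which helps avoid latch contention in tempdb
```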
Happy coding!