31 January 2025

🌌🏭KQL Reloaded: First Steps (Part II: Working with Dates)

Knowing how to work with dates and manipulate them accordingly is probably one of the most important pieces of knowledge in any query language. Fortunately, KQL provides many date-related functions that can help developers in the process, and the available documentation is detailed enough to support users in learning how to use them. The current post focuses on the main uses of dates. 

Please note that the scripts consider only the main time units. Please check the documentation of the individual functions for other time units.

Create dates via make_datetime, now and ago functions:

// creating dates from parts
print datetime1 = make_datetime(2025,1,1)
, datetime2 = make_datetime(2025,1,1,0,0,0)
, datetime3 = make_datetime(2025,1,1,1,2,3)

// creating dates from values
print date1 = datetime(2024-02-01)
, datetime(2024-02-01 00:00:00)
, datetime(2024-02-01 23:59:59)
, datetime(2024-02-01 23:59:59.123)

// now vs todatetime (return the same value)
print datetime1 = now()
, datetime2 = todatetime(now())
, datetime3 = ago(0h)

Extract information from dates via datetime_part, substring and individual functions: getyear, week_of_year, monthofyear, dayofyear, hourofday, dayofmonth, dayofweek, endofyear, endofweek, endofmonth, endofday, startofyear, startofweek, startofmonth, startofday:

// date parts from date
print now = now()
, year = datetime_part('year', now())
, month = datetime_part('month', now())
, day = datetime_part('day', now())
, hour = datetime_part('hour', now())
, minute = datetime_part('minute', now())
, second = datetime_part('second', now())

// date parts from string 
let t = datetime("2024-12-31 10:35:59");
print year = substring(t, 0, 4) 
, month = substring(t, 5, 2) 
, day = substring(t, 8, 2) 
, hour = substring(t, 11, 2) 
, minute = substring(t, 14, 2) 
, second = substring(t, 17, 2) 

// date parts via functions
print year = getyear(now())
, week = week_of_year(now())//ISO 8601 compliant
, month = monthofyear(now())
, day = dayofyear(now())
, hour = hourofday(now())

// day functions
print day_of_year = dayofyear(now())
, day_of_month = dayofmonth(now())
, day_of_week = dayofweek(now())

// end of time dates
print year = endofyear(now())
, week = endofweek(now())
, month = endofmonth(now())
, day = endofday(now())

// start of time dates
print year = startofyear(now())
, week = startofweek(now())
, month = startofmonth(now())
, day = startofday(now())

//time units
print hours1 = time(1hour)
, minutes1 = time(1min)
, seconds1 = time(1second)
, hours2 = totimespan("0.01:00:00")
, minutes2 = totimespan("0.00:01:00")
, seconds2 = totimespan("0.00:00:01")

Working with dates via datetime_add and datetime_diff functions:

// adding time units
print year = datetime_add('year',1,now())
, month = datetime_add('month',1,now())
, day = datetime_add('day',1,now())
, hour = datetime_add('hour',1,now())
, minutes = datetime_add('minute',1,now())
, seconds =  datetime_add('second',1,now())

// date differences
print years = datetime_diff('year', now(), ago(1h))
, months = datetime_diff('month', now(), ago(1h))
, days = datetime_diff('day', now(), ago(1h))
, hours = datetime_diff('hour', now(), ago(1h))
, minutes = datetime_diff('minute', now(), ago(1h))
, seconds = datetime_diff('second', now(), ago(1h))

Working with time zones via datetime_local_to_utc and datetime_utc_to_local functions:

// converting local time to UTC across time zones
print ParisTime = datetime_local_to_utc(now(),'Europe/Paris')
, LondonTime = datetime_local_to_utc(now(),'Europe/London')
, BudapestTime = datetime_local_to_utc(now(),'Europe/Budapest')
, AthensTime = datetime_local_to_utc(now(),'Europe/Athens')

// converting UTC to local time across time zones
print ParisTime = datetime_utc_to_local(now(),'Europe/Paris')
, LondonTime = datetime_utc_to_local(ago(1h),'Europe/London')
, BudapestTime = datetime_utc_to_local(ago(-1h),'Europe/Budapest')
, AthensTime = datetime_utc_to_local(ago(-2h),'Europe/Athens')

Applying different formatting with tostring, format_timespan and format_datetime functions:

// date and time to string
print date1 = tostring(datetime(2024-02-01 00:00:00))
, time1 = tostring(totimespan("0.01:02:03"))
, hours1 = tostring(time(1hour))

// formatting timespans
let t = time("25.10:35:59.123456");
print date1 = format_timespan(t, 'dd.hh:mm:ss:FF')
, date2 = format_timespan(t, 'ddd.h:mm:ss [ffff]')

// formatting dates
let t = datetime("2024-12-31 10:35:59");
print date1 = format_datetime(t,'yyyy-MM-dd [HH:mm:ss]')
, date2 = format_datetime(t,'yy-MM-dd HH:mm:ss')
, date3 = format_datetime(t,'yy-MM-dd [HH:mm:ss tt]')

Considering the above functions, one has a good basis for working with dates. 

Happy coding!

Previous Post <<||>> Next Post

29 January 2025

🌌🏭KQL Reloaded: First Steps (Part I: Simple Queries)

If one has followed the Microsoft training, webcasts and other resources, it becomes clear that the Kusto Query Language (KQL) can't be ignored, as it's useful in various scenarios that deal with large volumes of data. 

Even if KQL was created to query big tables, at least for the first steps it's recommended to start with a small data model, and once the basics are understood one can move to bigger databases. Moreover, it's easier to validate the logic when using small datasets.

Probably, the most important aspect before starting is that KQL is case-sensitive and this applies to everything – table and column names, operators, functions, etc. The below code can be tried in the online explorer (see [2]), which makes available several databases for playing around. The users might need to register beforehand.

The following queries are based on the ContosoSales database (available in the above-mentioned link). First, here are some simple queries. Each query is preceded by a short description given as a comment ("//") and must be run individually. 

// selecting all records
Customers 

// selecting 10 (arbitrary) records
Customers 
| take 10

// multiple filters
Customers 
| where CityName == 'Berkeley'
| where Occupation != 'Professional'
| take 10

// multiple filters on the same column
Customers 
| where Occupation == 'Professional' or Occupation == 'Clerical'
| take 10

// multiple filters on the same column
Customers 
| where Occupation in ('Professional','Clerical')
| take 10

// multiple filters on the same column
Customers 
| where not(Occupation in ('Professional','Clerical'))
| take 10

//subset of columns
Customers
| take 5
| project ContinentName, CityName, FirstName, LastName, Gender

Here are some examples for the selection of unique values, the equivalent of SELECT DISTINCT from SQL:

//distinct values used
Customers
| distinct  Occupation

//distinct values used sorted ascendingly 
Customers
| distinct  Occupation
| sort by Occupation asc

//combinations of values used
Customers
| distinct  Occupation, Education
| sort by Occupation asc, Education asc

When aggregated data is needed, one needs to resort to grouping the values, the equivalent of GROUP BY from SQL:

// record count
Customers 
| count

// record count for constraint
Customers 
| where CityName == 'Berkeley'
| count

// record count for constraint: returns 0 records (KQL is case sensitive)
Customers 
| where CityName == 'BERKELEY'
| count

// number of records by occupation 
Customers
| summarize occupations_count = count() by Occupation

// number of records by occupation with bar chart visual
Customers
| summarize occupations_count = count() by Occupation
| render barchart

The last query renders the data directly to a bar chart, which is a cool feature, especially when one needs to understand the distribution of values. Executing the query without the last line returns the underlying dataset.

Azure Data Explorer - Chart Example

Here are some first impressions:
1) The language is relatively simple, though the transition from SQL to KQL requires time, even if the thinking process behind writing the code is not that different. For those with no SQL knowledge, the transition might be a bit more challenging, though practice makes perfect!
2) One can try to run the code line by line to understand the steps used.
3) There are also some online code converters from SQL to KQL (see for example msticpy).
4) The intellisense capabilities facilitate the overall experience. 
5) Unfortunately, there seems to be no code formatter to HTML for KQL, so one needs to compromise in one way or another. 

Happy coding!

Previous Post <<||>> Next Post 


References:
[1] Microsoft Learn (2024) Kusto Query Language learning resources [link]
[2] TestingSpot Blog (2021) Learning Kusto Query Language - A tool for performance test engineers [link]

27 January 2025

🗄️🗒️Data Management: Data Quality Dimensions [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. 

Last updated: 27-Jan-2025

[Data Management] Data quality dimensions

  • {def} features of data that can be measured or assessed against defined standards to determine the quality of data
    • captures a specific aspect of general data quality
      • can refer to data values or to their schema
  • {type} hard dimensions
    • dimensions that can be measured
  • {type} soft dimensions
    • dimensions that can be measured only indirectly
      • ⇐ through interviews with data users or through any other kind of communication with users
    • dimensions whose measurement depends on the perception of the users of the data
  • {dimension} uniqueness [post]
    • the degree to which a value or set of values is unique within a dataset
      • can be determined based on a set of values supposed to be unique across the whole dataset
        • some systems have an artificial, respectively a natural unique identifier
      • measured in terms of either
        • the percentage of unique values available in a dataset 
        • the percentage of duplicate values available in a dataset (see the T-SQL sketch after these notes)
    • the impossibility of identifying whether a value is unique increases the chances for it to be duplicated
    • it can have broader implications
      • aggregated information is not shown correctly
        • ⇐ split across different entities
      • can lead to further duplicates in other areas
    • {recommendation} enforce uniqueness by design, if possible
    • {recommendation} check the data regularly for duplicates and disable or delete the duplicated records
      • ⇐ one should make sure that the records can't be further reused in business processes or analytics workloads
  • {dimension} completeness [post]
    • the extent to which data is missing in a dataset
      • ⇐ reflected in the number of the missing values
        • measured as percentage of the missing values compared to the total
    • determined by the presence of NULL values  
      • {type} attribute completeness 
        • the number of NULLs in a specific attribute
      • {type} tuple completeness 
        • the number of unknown values of the attributes in a tuple
      • {type} relation completeness 
        • the number of tuples with unknown attribute values in the relation
      • {type} value completeness
        • makes sense for complex, semi-structured columns such as XML data type columns
          • e.g. a complete element or attribute can be missing
    • considered in relation to 
      • mandatory attributes
        • attributes that need a not-Null value for each record
      • optional attributes
        • attributes that not necessarily need to be provided
      • inapplicable attributes
        • attributes not applicable (relevant) for certain scenarios by design
  • {dimension} conformity (aka format compliance) [post]
    • {def} the extent to which data is in the expected format
      • dependent on the data type and its definition
    • can be associated with a set of metadata 
      • data type
        • e.g. text, numeric, alphanumeric, positive, date
      • length
      • precision
      • scale
      • formatting patterns 
        • e.g. phone number, decimal and digit grouping symbols
        • different formatting might apply based on various business rules 
        • can use delimiters
    • {recommendation} define the data type and further constraints to enforce the various characteristics of the element
    • {recommendation} make sure that the delimiters don't overlap with other uses
  • {dimension} accuracy [post]
    • {def} the extent to which data is correct, respectively matches reality with an acceptable level of approximation
    • stricter than just conforming to business rules
    • can be measured at column and table level
      • [discrete data values] 
        • use frequency distribution of values
          • a value with very low frequency is probably incorrect
      • [alphanumeric values]
        • use string length distribution
          • a  string with a very atypical length is potentially incorrect
        • try to find patterns and then create pattern distribution.
          • patterns with low frequency probably denote wrong values
      • [continuous attributes]
        • use descriptive statistics
          • just by looking at minimal and maximal values, you can easily spot potentially problematic data
  • {dimension} consistency [post]
    • {def} the degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component
      • {type} notational consistency
        • the extent (data) values are consistent in notation
      • {type} semantic consistency
        • the degree to which data has unique meaning
        • is more restrictive than the notational consistency
    • measures the equivalence of information stored in various repositories
    • involves comparing values with a predefined set of possible values
      •  from the same or from different systems
    • can be measured at column and table level
    • can have different scopes
      • cross-system consistencies
        • among systems or data repositories
      • cross-record consistency
        • within the same repository
      • temporal consistency
        •  within the same record at different points in time
  • {dimension} timeliness [post]
    • tells the degree to which data is current and available when needed
      • there is always some delay between change in the real world and the moment when this change is entered into a system
    • stale data/obsolete data
  • {dimension} structuredness [post]
    • the degree to which a data structure or model possesses a definite pattern of organization of its interdependent parts
    • allows the categorization of data as
      • structured data [def]
        • refers to structures that can be easily perceived or known, and that raise no doubt about the structure’s delimitations
      • unstructured data [def]
        • refers to textual data and media content (video, sound, images), in which structural patterns, even if they exist, are hard to discover or are not predefined
      • semi-structured data [def]
        • refers to islands of structured data stored with unstructured data, or vice versa
      • ⇐ the more structured the data, the easier it is to be processed
  • {dimension} referential integrity [post]
    • {def} the degree to which the values of a key in one table (aka reference value) match the values of a key in a related table (aka the referenced value)
    • it's an architectural concept of the database 
    • {recommendation} keep the referential integrity of a system by design
      • some systems build logic for assuring the referential integrity in the applications and not in the database
  • {dimension} currency (aka actuality)
    • the extent to which data is actual (current)
    • can be considered as a special type of accuracy 
      • ⇐ when the data is not actual, it doesn’t reflect reality
  • {dimension} ease of use
    • the extent to which data can be used for a given purpose
      • usually it refers to whether the data can be processed as needed
      • depends on the application or on the user interface
  • {dimension} fitness of use
    • the degree to which the data is fit for use
      • the data may have good quality for a given purpose but 
        • not usable for other purposes
        • can be used as substitute for other data
          • e.g. use phone area codes instead of ZIP codes to locate customers approximately
  • {dimension} trustfulness [post]
    • the degree to which the data can be trusted
      • is a matter of perception 
        • ask users whether they trust the data and what their reasons are
          • if the users don’t trust the data 
            • they will create their own solutions
            • they will not use applications
  • {dimension} entropy
    • {def} the average amount of information conveyed
      • ⇐ quantification of information in a system
      • ⇐ the more dispersed the values and the more the frequency distribution of a discrete column is equally spread among the values, the more information is available [1]
      • ⇐ can tell whether your data is suitable for analysis or not
    • can be measured at column and table level
  • {dimension} presentation quality 
    • applicable to applications that present data
      • format and appearance should support the appropriate use of data
      • depends on the UI used
  • {recommendation} have a dedicated system for maintaining the master data and broadcast the data to the subscribers as needed
    • the data should be exclusively managed through the management system
    • {anti-pattern} data is modified in the subscribers and the changes aren't always reflected back to the source system
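
As a rough illustration of how some of the hard dimensions above can be measured, below is a minimal T-SQL sketch that profiles a single column: the percentage of missing values (completeness), the percentage of duplicate values (uniqueness) and the column's entropy. The table and column names (dbo.Customers, CityName) are placeholders only and must be replaced with objects from your own environment.

-- completeness and uniqueness for a single column (placeholder table/column)
SELECT COUNT(*) total_records
, COUNT(CityName) non_null_values
, 100.0 * (COUNT(*) - COUNT(CityName)) / COUNT(*) missing_percentage
, 100.0 * (COUNT(CityName) - COUNT(DISTINCT CityName)) / COUNT(*) duplicate_percentage
FROM dbo.Customers

-- entropy of the column (the more evenly spread the values, the higher the result)
SELECT -SUM(p * LOG(p, 2)) entropy
FROM (
    SELECT COUNT(*) * 1.0 / SUM(COUNT(*)) OVER () p
    FROM dbo.Customers
    WHERE CityName IS NOT NULL
    GROUP BY CityName
) freq
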
Previous Post <<||>>  Next Post

References:
[1] Dejan Sarka et al (2012) Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 (Training Kit)

26 January 2025

🧭Business Intelligence: Perspectives (Part XXV: Grounding the Roots)

Business Intelligence Series
Business Intelligence Series

When building something that is supposed to last, one needs a solid foundation on which the artifact can be built upon. That’s valid for castles, houses, IT architectures, and probably most important, for BI infrastructures. There are so many tools out there that allow building a dashboard, report or other types of BI artifacts with a few drag-and-drops, moving things around, adding formatting and shiny things. In many cases all these steps are followed to create a prototype for a set of ideas or more formalized requirements keeping the overall process to a minimum. 

Rapid prototyping, the process of building a proof-of-concept by focusing at a high level on the most important design and functional aspects, is helpful and sometimes a mandatory step in eliciting and addressing the requirements properly. It provides a fast road from an idea to the actual concept; however, the prototype, still in its early stages, can rapidly become the actual solution that unfortunately continues to haunt the dreams of its creator(s). 

Especially in the BI area, there are many solutions that started as a prototype and gained mass until they started to disturb many things around them, with implications for security, performance, data quality, and many other aspects. Moreover, the mass becomes critical in time, to the degree that it pulls more attention and effort than intended, with positive and negative impact altogether. It’s like building an artificial sun that suddenly becomes a danger for the nearby planet(s) and other celestial bodies. 

When building such artifacts, it’s important to define which goals the end-result must meet, respectively which would be nice to have, differentiating clearly between them, and also when it is time to stop and properly address the aspects mandatory in transitioning from the prototype to an actual solution that follows the best practices in scope. It’s also the point when one should decide upon the solution’s feasibility, the needed quality acceptance criteria, and broader aspects like supporting processes, human resources, data, and the various aspects that have an impact. Unfortunately, many solutions gain inertia without the proper foundation and in extremis succumb under the various forces.

Developing software artifacts of any type is a balancing act between all these aspects, often under suboptimal circumstances. Therefore, one must be able to set priorities right, react and change direction (and gear) according to the changing context. Many wish all this to be a straight sequential road, when in reality it looks more like mountain climbing, with many peaks, valleys and change of scenery. The more exploration is needed, the slower the progress.

All these aspects require additional time, effort, resources and planning, which can easily increase the overall complexity of projects to the degree that it leads to (exponential) effort and, more importantly, waste. Moreover, the complexity pushes back, leading to more effort, and with it to higher costs. On top of this comes the iterative character of BI topics, multiple iterations being needed from the initial concept to the final solution(s); sometimes many steps are discarded in the process, corners are cut, with all the further implications following from this. 

Somewhere in the middle, between the minimum and the broad, overextending complexity, is the sweet spot that drives the most impact with a minimum of effort. For some organizations, respectively professionals, reaching and remaining in the zone will be quite a challenge, though that’s not impossible. It’s important to be aware of all the aspects that drive and sustain the quality of artefacts, data and processes. There’s a lot to learn from successful as well as from failed endeavors, and the various aspects should be reflected in the lessons learned. 

24 January 2025

🧭Business Intelligence: Perspectives (Part XXIV: Building Castles in the Air)

Business Intelligence Series
Business Intelligence Series

Business users have mainly three means of visualizing data – reports, dashboards and, more recently, notebooks, the latter being a mix between reports and dashboards. Given that all three types of display can be a mix of tabular representations and visuals/visualizations, the difference between them is often negligible, to the degree that the terms are used interchangeably. 

For example, in Power BI a report is a "multi-perspective view into a single semantic model, with visualizations that represent different findings and insights from that semantic model" [1], while a dashboard is "a single page, often called a canvas, that uses visualizations to tell a story" [1], a dashboard’s visuals coming from one or more reports [2]. Despite this clear delimitation, the two concepts continue to be mixed and misused in conversations, even by data-related professionals. This happens also because in other tools the vendors designate as dashboards what is called a report in Power BI. 

Given the limited terminology, it’s easy to generalize that dashboards are useless, poorly designed, bad for business users, and so on. As Stephen Few recognized almost two decades ago, "most dashboards fail to communicate efficiently and effectively, not because of inadequate technology (at least not primarily), but because of poorly designed implementations" [3]. Therefore, when people say that "dashboards are bad", they refer to the result of poor implementations, of which some of them were part, which frankly is a different topic! Unfortunately, BI implementations reflect, probably more than any other area, how easy it is to fail!

Frankly, it is not necessarily the poor implementation of a project management methodology that is at fault here, which quite often happens, but the way requirements are defined, understood, documented and implemented. Even if these last aspects are part of the methodologies, they are merely a reflection of how people understand the business. The outcomes of BI implementations are rooted in other areas, and it starts with how the strategic goals and objectives are defined, and how the elements that need oversight are considered in the broader perspectives. The dashboards thus become the end-result of a chain of failures, of failing to build the business-related foundation on which the reporting infrastructure should be built. It’s so much easier to shift the blame onto what’s perceptible than onto what’s missing!

Many dashboards are built because people need a sense of what’s happening in the business. It starts with some ideas based on the problems identified in organizations, one or more dashboards are built, and sometimes a lot of time is invested in the process. Then, some important progress is made, and everything comes to a standstill if the numbers don’t reveal something new or important, or whatever the users’ perception is. Some might regard this as failure, though as long as the initial objectives were met, something was learned in the process and a difference was made, one can’t equate this with failure!

It’s more important to recognize the temporary character of dashboards, respectively of the requirements that lead to them and are built around them. Of course, this occasionally requires a different approach to the whole topic. It starts with how KPIs and other business metrics are defined and organized, respectively with how data repositories are built, and it ends with how data are visualized and reported.

As practice has often revealed, it's possible to build castles in the air, without a solid foundation, though the expectation for such edifices to sustain the weight of businesses is unrealistic. Such edifices break with the first strong storm, and unfortunately it's easier to blame a set of tools, some people or a whole department instead of looking critically at the whole organization!


References:
[1] Microsoft Learn (2024) Power BI: Glossary [link]
[2] Microsoft Learn (2024) Power BI: Dashboards for business users of the Power BI service [link]
[3] Stephen Few (2006) Information Dashboard Design

23 January 2025

💎SQL Reloaded: Number of Records VI (via sp_MSForEachTable Undocumented Stored Procedure)

Starting with SQL Server 2000 it's possible to execute a command via the undocumented stored procedure sp_MSForEachTable for each table available in a database, respectively for subsets of the tables. In a previous post I showed how the stored procedure can be used in several scenarios, including how to get the total number of records for a set of tables. However, the code used generates a result set for each table, which makes it difficult to aggregate the information for further processing. In many scenarios, it would be useful to store the result in a temporary or even persisted table.

-- dropping the tables
DROP TABLE IF EXISTS #Tables
DROP TABLE IF EXISTS #TablesRecordCount

-- create a temporary table to store the input list
SELECT TableName
INTO #Tables 
FROM (VALUES ('Person.Address')
, ('Person.AddressType')
, ('Person.BusinessEntity')) DAT(TableName)


-- create a temporary table to store the results
CREATE TABLE dbo.#TablesRecordCount (
  table_name nvarchar(150) NOT NULL
, number_records bigint
, run_date datetime2(0)
, comment nvarchar(255)
)

-- getting the number of records for the list of tables into the result table
INSERT INTO #TablesRecordCount
EXEC sp_MSForEachTable @command1='SELECT ''?'' [Table], COUNT(*) number_records, GetDate() run_date, ''testing round 1'' comment FROM ?'
, @whereand = ' And Object_id In (Select Object_id(TableName) FROM #Tables)'

-- reviewing the result
SELECT *
FROM #TablesRecordCount
ORDER BY number_records DESC

The above solution uses two temporary tables, though it can be easily adapted to persist the result in a standard table: just replace the "#" with the schema part (e.g. "dbo."). This can be useful in troubleshooting scenarios, when the code is run at different points in time, possibly for different sets of tables. 
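
For example, a persisted variant could look as follows; the table name dbo.TablesRecordCount is only an assumption, and the #Tables list created above is expected to be still available:

-- create a persisted table to store the results (the table name is a placeholder)
CREATE TABLE dbo.TablesRecordCount (
  table_name nvarchar(150) NOT NULL
, number_records bigint
, run_date datetime2(0)
, comment nvarchar(255)
)

-- getting the number of records for the list of tables into the persisted table
INSERT INTO dbo.TablesRecordCount
EXEC sp_MSForEachTable @command1='SELECT ''?'' [Table], COUNT(*) number_records, GetDate() run_date, ''testing round 2'' comment FROM ?'
, @whereand = ' And Object_id In (Select Object_id(TableName) FROM #Tables)'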

The code is pretty simple and can be extended as needed. Unfortunately, there's no guarantee that the sp_MSForEachTable stored procedure will be supported in future versions of SQL Server. For example, the stored procedure is not available in SQL databases, respectively in Fabric warehouses. In SQL databases the following error is thrown:

"Msg 2812, Level 16, State 62, Line 1, Could not find stored procedure 'sys.sp_MSForEachTable'."

To test whether the feature works in your environment, it's enough to run a call to the respective stored procedure:

-- retrieve the record count for all tables
EXEC sp_MSForEachTable @command1='SELECT ''?'' [Table], COUNT(*) number_records FROM ?'

Or, you can check whether it works for one table (replace the Person.AddressType table with one from your environment):

-- getting the number of records for a single table
EXEC sp_MSForEachTable @command1='SELECT ''?'' [Table], COUNT(*) number_records FROM ?'
, @whereand = ' And Object_id = Object_id(''Person.AddressType'')'

The solution could prove to be useful in multiple scenarios, though one should consider also the risk of being forced to rewrite the code when the used stored procedure becomes unavailable. Even if it takes more time to write, a solution based on cursors can be more feasible (see previous post).

Update 29-Jan-2025: Probably, despite their usefulness, the undocumented features will not be brought to SQL databases (see [1], 47:30). So, be careful about using the respective features as standard solutions in production environments!

Happy coding!

Previous Post <<||>> Next Post

References:
[1] Microsoft Reactor (2025) Ask The Expert - Fabric Edition - Fabric Databases [link]

💎🏭SQL Reloaded: Number of Records V (via Cursors, a Solution for Warehouses in Microsoft Fabric)

After deploying the sample warehouse available in Microsoft Fabric, I tried to check the number of records available in the deployed tables under the dbo schema. Surprisingly, the sys.partitions.count column has 0 values for all the tables associated with the respective schema (see post). 

There are only a few tables available, and taking a record count for each table should be enough, which is relatively simple with the undocumented sp_MSForEachTable. Unfortunately, this approach doesn't work either, so one needs to revert to the use of old-fashioned cursors (as I used to do in SQL Server 2000):

-- number of records via cursor
DECLARE @table_name nvarchar(150)
DECLARE @sql nvarchar(250)
DECLARE @number_records bigint 
DECLARE @number_tables int, @iterator int

DROP TABLE IF EXISTS dbo.#tables;

CREATE TABLE dbo.#tables (
  ranking int NOT NULL
, table_name nvarchar(150) NOT NULL
, number_records bigint
)

INSERT INTO #tables
SELECT row_number() OVER(ORDER BY object_id) ranking
, concat(schema_name(schema_id),'.', name) table_name
, NULL number_records
FROM sys.tables obj
WHERE obj.schema_id = schema_id('dbo')
ORDER BY table_name

SET @iterator = 1
SET @number_tables = IsNull((SELECT count(*) FROM #tables), 0)

WHILE (@iterator <= @number_tables)
BEGIN 
    SET @table_name = (SELECT table_name FROM #tables WHERE ranking = @iterator)
    SET @sql = CONCAT(N'SELECT @NumberRecords = count(*) FROM ', @table_name)

    BEGIN TRY
        -- get the number of records
        EXEC sp_executesql @stmt = @sql
        , @params = N'@NumberRecords bigint OUTPUT'
        , @NumberRecords = @number_records OUTPUT

        IF IsNull(@number_records, 0) > 0  
        BEGIN
            SET @sql = 'UPDATE #tables' 
                + ' SET number_records = ' + Str(@number_records)
                + ' WHERE table_name = ''' + @table_name + '''';

            EXEC(@sql)
        END 
    END TRY
    BEGIN CATCH  
        -- no action needed in case of error
    END CATCH;

    SET @iterator = @iterator + 1
END

SELECT *
FROM dbo.#tables;

--DROP TABLE IF EXISTS dbo.#tables;
Results:

ranking  table_name          number_records
1        dbo.Date            5844
2        dbo.Geography       305179
3        dbo.HackneyLicense  42958
4        dbo.Time            86400
5        dbo.Weather         526330
6        dbo.Trip            2838927
7        dbo.Medallion       13668

Comments:
1) It's a lot of code for a simple task, though the code can be easily duplicated and adapted for similar requirements. Unfortunately, over time it can also lead to many instances of the same code. When possible, one should consider encapsulating the logic in a stored procedure. 
2) It's usually a good idea to check how many records are available in the tables used for testing, as this can impact queries' performance and tables' appropriateness for the tests performed. Moreover, it's a good idea to understand the volume of data when taking over or working with a database. 
3) If one removes the row_number function, the code should run also in SQL Server 2000. Similar solutions were used then for retrieving the record count.
4) Microsoft recommends not dropping the temporary tables explicitly, but letting SQL Server handle the cleanup automatically, thus taking advantage of the Optimistic Latching Algorithm, which helps prevent contention on TempDB [2].
5) There are others who stumbled over this issue (see [1]).
6) The solution has been tested successfully also in SQL databases.
7) The whole code must be run together because the temporary table seems to have only a transitory scope! An attempt to rerun the last SELECT from #tables raises the error: "Invalid object name '#tables'"

Happy coding!

Previous Post <<||>> Next Post

References:
[1] Koen Verbeeck (2024) Get row counts of all tables in a Microsoft Fabric warehouse [link]
[2] Haripriya SB (2024) Do NOT drop #temp tables [link]

