One reads in the training material that the SQL Endpoint provides a read-only experience [1], meaning that no data can be written back to the delta lake tables. Playing with the metadata available in Spark SQL via notebooks and the SQL Endpoint (see post), I realized that there is more to the statement! Even if one can query only delta tables via the SQL Endpoint, this doesn't mean that one can't build a semantic model on top of it, much like one was able to do via the serverless SQL pool in Azure Synapse.
In Spark one can create views and functions via SQL, PySpark and the other supported languages, though they will not be available to the SQL Endpoint! To make the data generated in the process usable there, the respective data needs to be saved to delta tables. Conversely, one can still create views, functions and stored procedures via the SQL Endpoint, though these objects won't be available in Spark SQL!
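For example, here's a minimal Spark SQL sketch that persists a filtered subset of the Assets table as its own delta table, which the SQL Endpoint can then query; the assets_microsoft name is just for illustration:

/* persist derived data as a delta table (Spark SQL sketch) */

-- drop the test table
DROP TABLE IF EXISTS assets_microsoft;

-- create the delta table from a query
CREATE TABLE assets_microsoft
COMMENT 'Microsoft assets in scope (table)'
AS
SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM assets
WHERE Vendor = 'Microsoft';

-- review data
SELECT *
FROM assets_microsoft;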
This has important implications, though in this post let's focus on the syntax and create several objects for testing purposes in the two environments. I'll use the Assets delta table created in a previous post. The Spark SQL code should be run in a notebook (e.g. one cell per group of statements), while the code for the SQL Endpoint should be run in SQL Server Management Studio.
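In case the Assets table isn't at hand, a minimal substitute with the columns used below can be created in a notebook cell; the data types and the two sample records are my assumptions, not the original definition from the previous post:

/* hypothetical Assets table for testing (Spark SQL) */

-- drop the test table
DROP TABLE IF EXISTS assets;

-- create the test table
CREATE TABLE assets (
    Id int,
    CreationDate timestamp,
    Vendor string,
    Asset string,
    Model string,
    Owner string,
    Tag string,
    Quantity int
);

-- add a few sample records (made-up values)
INSERT INTO assets
VALUES (1, current_timestamp(), 'Microsoft', 'Surface Pro', '9', 'IT', 'SP0001', 1)
     , (2, current_timestamp(), 'Dell', 'Laptop', 'Latitude 5540', 'IT', 'DL0001', 1);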
Views
/* test view's creation in the SQL Endpoint */

-- drop the test view
DROP VIEW IF EXISTS dbo.vAssets_Microsoft2;
GO

-- create the test view
CREATE VIEW dbo.vAssets_Microsoft2
AS
--Microsoft assets
SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM dbo.Assets
WHERE Vendor = 'Microsoft';
GO

-- test the view
SELECT *
FROM dbo.vAssets_Microsoft2;
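Once created, the view behaves like any other endpoint object and can be used, for example, to summarize the data (a simple usage sketch):

-- summarize the data via the view
SELECT Model
, SUM(Quantity) Quantity
FROM dbo.vAssets_Microsoft2
GROUP BY Model
ORDER BY Model;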
/* test view's creation in Spark SQL */

-- drop test view
DROP VIEW IF EXISTS vAssets_Microsoft;

-- create test view
CREATE VIEW vAssets_Microsoft
COMMENT 'Microsoft assets in scope (view)'
AS
SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM assets
WHERE Vendor = 'Microsoft';

-- review data
SELECT *
FROM vAssets_Microsoft;
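To double-check that the view landed in the Spark catalog together with its comment, the metadata can be reviewed from the same notebook:

-- list the views available in Spark SQL
SHOW VIEWS;

-- review the view's metadata, including the comment
DESCRIBE TABLE EXTENDED vAssets_Microsoft;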
Table-Valued Functions
/* test function's creation in the SQL Endpoint */

-- drop the test function
DROP FUNCTION IF EXISTS dbo.fAssets_Microsoft2;
GO

-- create the test function
CREATE FUNCTION dbo.fAssets_Microsoft2(
    @Vendor nvarchar(max))
RETURNS TABLE
AS
RETURN (
    SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
    FROM dbo.Assets
    WHERE Vendor = @Vendor
);
GO

-- test the function
SELECT *
FROM dbo.fAssets_Microsoft2('Microsoft');

SELECT *
FROM dbo.fAssets_Microsoft2('Dell');
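Because the function returns a table, it can also be applied to a whole list of vendors in one statement. The sketch below assumes that CROSS APPLY and the VALUES table constructor are supported by the SQL Endpoint, as they are in SQL Server; the vendor list is made up for illustration:

-- apply the function to several vendors at once
SELECT V.Vendor
, A.Asset
, A.Model
, A.Quantity
FROM (VALUES ('Microsoft'), ('Dell')) V(Vendor)
     CROSS APPLY dbo.fAssets_Microsoft2(V.Vendor) A;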
/* test function's creation in Spark SQL */

-- drop test function
DROP FUNCTION IF EXISTS fAssets_Microsoft;

-- create test function
CREATE FUNCTION fAssets_Microsoft(
    pVendor string)
RETURNS TABLE
AS RETURN
    SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
    FROM assets
    WHERE Vendor = pVendor;

-- review data
SELECT *
FROM fAssets_Microsoft('Microsoft');
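Similarly to the views, one can check the function's registration in the Spark catalog:

-- list the user-defined functions available in Spark SQL
SHOW USER FUNCTIONS;

-- review the function's metadata
DESCRIBE FUNCTION EXTENDED fAssets_Microsoft;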
Stored Procedure
Stored procedures aren't available in Spark SQL, though this doesn't mean that we can't test the code in the SQL Endpoint:

/* test procedure's creation in the SQL Endpoint */

-- drop the test procedure
DROP PROCEDURE IF EXISTS dbo.spAssets_Microsoft2;
GO

-- create the test procedure
CREATE PROCEDURE dbo.spAssets_Microsoft2(
    @Vendor nvarchar(max) = NULL)
AS
--Microsoft assets
SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM dbo.Assets
WHERE Vendor = IsNull(@Vendor, Vendor);
GO

-- test the procedure
EXEC dbo.spAssets_Microsoft2 'Microsoft';
EXEC dbo.spAssets_Microsoft2 'Dell';
EXEC dbo.spAssets_Microsoft2;
Note: The object names were kept different between the two environments, just in case one needs to test the objects' availability across platforms.
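On the SQL Endpoint side, the created objects can be listed via the metadata views (assuming the INFORMATION_SCHEMA views are exposed by the endpoint); on the Spark side the SHOW commands used above do the same job:

/* list the objects created via the SQL Endpoint */

-- the views
SELECT TABLE_SCHEMA
, TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS;

-- the functions and stored procedures
SELECT ROUTINE_SCHEMA
, ROUTINE_NAME
, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES;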
Resources:
[1] Microsoft Learn (2023) Work with Delta Lake tables in Microsoft Fabric (link)