
28 March 2025

🏭🗒️Microsoft Fabric: OneLake Role-Based Access Control (RBAC) [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 28-Mar-2025

[Microsoft Fabric] OneLake Role-based access control (RBAC)

  • {def} security framework that allows managing access to resources by assigning roles to users or groups
    • applies to Lakehouse Items only [1]
    • restricts data access for users with Workspace Viewer or read access to a lakehouse [1]
    • doesn't apply to Workspace Admins, Members, or Contributors [1]
      • ⇒ supports only Read level of permissions [1]
    • uses role assignments to apply permissions to its members
      • assigned to 
        • individuals
        • security groups
        • Microsoft 365 groups
        • distribution lists
        • ⇐ every member of the user group gets the assigned role [1]
      • users in multiple groups get the highest level of permission that is provided by the roles [1]
    • managed through the lakehouse data access settings [1]
    • when a lakehouse is created, OneLake generates a default RBAC Role named Default Readers [1]
      • allows all users with ReadAll permission to read all folders in the Item [1]
    • permissions always inherit to the entire hierarchy of the folder's files and subfolders [1]
    • provides automatic traversal of parent items to ensure that data is easy to discover [1]
      • ⇐ similar to Windows folder permissions [1]
      • [shortcuts] shortcuts to other OneLake locations have specialized behavior [1]
        • the access to a OneLake shortcut is determined by the target permissions of the shortcut [1]
          • when listing shortcuts, no call is made to check the target access [1]
            • ⇒ when listing a directory all internal shortcuts will be returned regardless of a user's access to the target [1]
              • when a user tries to open the shortcut the access check will evaluate and a user will only see data they have the required permissions to see [1]
    • enables restricting data access in OneLake to specific folders only [1]
  • {action} share a lakehouse
    • grants other users or a group of users access to a lakehouse without giving access to the workspace and the rest of its items [1]
    • found through 
      • Data Hub 
      • 'Shared with Me' section in Microsoft Fabric
  • [shortcuts] permissions always inherit to all Internal shortcuts where a folder is defined as target [1]
    • when a user accesses data through a shortcut to another OneLake location, the identity of the calling user is used to authorize access to the data in the target path of the shortcut [1]
      • ⇒ the user must have OneLake RBAC permissions in the target location to read the data [1]
      • defining RBAC permissions for the internal shortcut is not allowed [1]
        • must be defined on the target folder located in the target item [1]
        • OneLake enables RBAC permissions only for shortcuts targeting folders in lakehouse items [1]


References:
[1] Microsoft Learn (2024) Fabric: Role-based access control (RBAC) [link]
[2] Microsoft Learn (2024) Best practices for OneLake security [link]

Resources:
[R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

Acronyms:
ADLS - Azure Data Lake Storage
RBAC - Role-Based Access Control

🏭🗒️Microsoft Fabric: Hadoop [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 28-Mar-2025

[Microsoft Fabric] Hadoop

  • Apache software library
    • backend technology that makes storing data and running large-scale parallel computations possible
    • open-source framework 
    • widely adopted 
    • implements its own distributed file system (HDFS)
      • enables applications to scale to petabytes of data employing commodity hardware
    • based on MapReduce API 
      • software framework for writing jobs that process vast amounts of data [2] and enables work parallelization
      • {function} Mapper
        • consumes input data, analyzes it, and emits tuples (aka key-value pairs) [2]
        • ⇐ the analysis usually involves filtering and sorting operations [1]
      • {function} Reducer
        • consumes tuples emitted by the Mapper and performs a summary operation that creates a smaller, combined result from the Mapper data [2]
  • {benefit} economical, scalable storage model
    • can run on commodity hardware that in turn utilizes commodity disks
      • the price point per terabyte is lower than that of almost any other technology [1]
  • {benefit} massively scalable IO capability
    • aggregate IO and network capacity is higher than that provided by dedicated storage arrays [1]
    • adding new servers to Hadoop adds storage, IO, CPU, and network capacity all at once [1]
      • ⇐ adding disks to a storage array might simply exacerbate a network or CPU bottleneck within the array [1]
  • {characteristic} reliability
    • enabled by fault-tolerant design
    • fault-tolerant MapReduce execution
      • ⇐ detects task failure on one node of the distributed system and restarts programs on other healthy nodes
    • data in Hadoop is stored redundantly in multiple servers and can be distributed across multiple computer racks [1] 
      • ⇐ failure of a server does not result in a loss of data [1]
        • ⇐ the job continues even if a server fails
          • ⇐ the processing switches to another server [1]
      • every piece of data is usually replicated across three nodes
        • ⇐ can be located on separate server racks to avoid any single point of failure [1]
  • {characteristic} scalable processing model
    • MapReduce represents a widely applicable and scalable distributed processing model
    • capable of brute-forcing acceptable performance for almost all algorithms [1]
      • not the most efficient implementation for all algorithms
  • {characteristic} schema on read
    • the imposition of structure can be delayed until the data is accessed
    • ⇐ as opposed to the schema-on-write mode 
    • ⇐ used by relational data warehouses
    • data can be loaded into Hadoop without having to be converted to a highly structured normalized format [1]
      • {advantage} data can be quickly ingested in its various forms [1]
        • ⇐ this is sometimes referred to as schema on read [1]
  • {architecture} Hadoop 1.0
    • mixed nodes
      • the majority of servers in a Hadoop cluster function both as data nodes and as task trackers [1]
        • each server supplies both data storage and processing capacity (CPU and memory) [1]
    • specialized nodes
      • job tracker node 
        • coordinates the scheduling of jobs run on the Hadoop cluster [1]
      • name node 
        • a sort of directory that provides the mapping from blocks on data nodes to files on HDFS [1]
    • {disadvantage} architecture limited to MapReduce workloads [1]
    • {disadvantage} it provides limited flexibility with regard to scheduling and resource allocation [1]
  • {architecture} Hadoop 2.0 
    • layers on top of the Hadoop 1.0 architecture [1]
    • {concept} YARN (aka Yet Another Resource Negotiator)
      • improves scalability and flexibility by splitting the roles of the Task Tracker into two processes [1]
        • {process} Resource Manager 
          • controls access to the cluster's resources (memory, CPU)
        • {process} Application Manager 
          • (one per job) controls task execution
    • treats traditional MapReduce as just one of the possible frameworks that can run on the cluster [1]
      • allows Hadoop to run tasks based on more complex processing models [1]
  • {concept} Distributed File System 
    • a protocol used for storage and replication of data [1]

Acronyms:
DFS - Distributed File System
DWH - Data Warehouse
HDFS - Hadoop Distributed File System
YARN - Yet Another Resource Negotiator 

References:
[1] Guy Harrison (2015) Next Generation Databases: NoSQL, NewSQL, and Big Data
[2] Microsoft Learn (2024) What is Apache Hadoop in Azure HDInsight? [link]

Resources:
[R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

26 March 2025

💠🏭🗒️Microsoft Fabric: Polaris SQL Pool [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources and may deviate from them. Please consult the sources for the exact content!

Unfortunately, besides the referenced papers, there's almost no material that could be used to enhance the understanding of the concepts presented. 

Last updated: 26-Mar-2025

Read and Write Operations in Polaris [2]

[Microsoft Fabric] Polaris SQL Pool

  • {def} distributed SQL query engine that powers Microsoft Fabric's data warehousing capabilities
    • designed to unify data warehousing and big data workloads while separating compute and state for seamless cloud-native operations
    • based on a robust DCP 
      • designed to execute read-only queries in a scalable, dynamic and fault-tolerant way [1]
      • a highly-available micro-service architecture with well-defined responsibilities [2]
        • data and query processing is packaged into units (aka tasks) 
          • can be readily moved across compute nodes and re-started at the task level
        • widely-partitioned data with a flexible distribution model [2]
        • a task-level "workflow-DAG" that is novel in spanning multiple queries [2]
        • a framework for fine-grained monitoring and flexible scheduling of tasks [2]
  • {component} SQL Server Front End (SQL-FE)
    • responsible for 
      • compilation
      • authorization
      • authentication
      • metadata
        • used by the compiler to 
          • {operation} generate the search space (aka MEMO) for incoming queries
          • {operation} bind metadata to data cells
          • leveraged to ensure the durability of the transaction manifests at commit [2]
            • only transactions that successfully commit need to be actively tracked to ensure consistency [2]
            • any manifests and data associated with aborted transactions are systematically garbage-collected from OneLake through specialized system tasks [2]
  • {component} SQL Server Backend (SQL-BE)
    • used to perform write operations on the LST [2]
      • inserting data into a LST creates a set of Parquet files that are then recorded in the transaction manifest [2]
      • a transaction is represented by a single manifest file that is modified concurrently by (one or more) SQL BEs [2]
        • SQL BE leverages the Block Blob API provided by ADLS to coordinate the concurrent writes  [2]
        • each SQL BE instance serializes the information about the actions it performed, either adding a Parquet file or removing it [2]
          • the serialized information is then uploaded as a block to the manifest file
          • uploading the block does not yet make any visible changes to the file [2]
            • each block is identified by a unique ID generated on the writing SQL BE [2]
        • after completion, each SQL BE returns the ID of the block(s) it wrote to the Polaris DCP [2]
          • the block IDs are then aggregated by the Polaris DCP and returned to the SQL FE as the result of the query [2]
      • the SQL FE further aggregates the block IDs and issues a Commit Block operation against storage with the aggregated block IDs [2]
        • at this point, the changes to the file on storage will become effective [2]
      • changes to the manifest file are not visible until the Commit operation on the SQL FE
        • the Polaris DCP can freely restart any part of the operation in case there is a failure in the node topology [2]
      • the IDs of any blocks written by previous attempts are not included in the final list of block IDs and are discarded by storage [2]
    • [read operations] SQL BE is responsible for reconstructing the table snapshot based on the set of manifest files managed in the SQL FE
      • the result is the set of Parquet data files and deletion vectors that represent the snapshot of the table [2]
        • queries over these are processed by the SQL Server query execution engine [2]
        • the reconstructed state is cached in memory and organized in such a way that the table state can be efficiently reconstructed as of any point in time [2]
          • enables the cache to be used by different operations operating on different snapshots of the table [2]
          • enables the cache to be incrementally updated as new transactions commit [2]
  • {feature} supports explicit user transactions
    • can execute multiple statements within the same transaction in a consistent way (see the T-SQL sketch after this outline)
      • the manifest file associated with the current transaction captures all the (reconciled) changes performed by the transaction [2]
        • changes performed by prior statements in the current transaction need to be visible to any subsequent statement inside the transaction (but not outside of the transaction) [2]
    • [multi-statement transactions] in addition to the committed set of manifest files, the SQL BE reads the manifest file of the current transaction and then overlays these changes on the committed manifests [1]
    • {write operations} the behavior of the SQL BE depends on the type of the operation.
      • insert operations 
        • only add new data and have no dependency on previous changes [2]
        • the SQL BE can serialize the metadata blocks holding information about the newly created data files just like before [2]
        • the SQL FE, instead of committing only the IDs of the blocks written by the current operation, appends them to the list of previously committed blocks
          • ⇐ effectively appends the data to the manifest file [2]
    • {update|delete operations} 
      • handled differently 
        • ⇐ since they can potentially further modify data already modified by a prior statement in the same transaction [2]
          • e.g. an update operation can be followed by another update operation touching the same rows
        • the final transaction manifest should not contain any information about the parts from the first update that were made obsolete by the second update [2]
      • SQL BE leverages the partition assignment from the Polaris DCP to perform a distributed rewrite of the transaction manifest to reconcile the actions of the current operation with the actions recorded by the previous operation [2]
        • the resulting block IDs are sent again to the SQL FE where the manifest file is committed using the (rewritten) block IDs [2]
  • {concept} Distributed Query Processor (DQP)
    • responsible for 
      • distributed query optimization
      • distributed query execution
      • query execution topology management
  • {concept} Workload Management (WLM)
    •  consists of a set of compute servers that are, simply, an abstraction of a host provided by the compute fabric, each with a dedicated set of resources (disk, CPU and memory) [2]
      • each compute server runs two micro-services
        • {service} Execution Service (ES) 
          • responsible for tracking the life span of tasks assigned to a compute container by the DQP [2]
        • {service} SQL Server instance
          • used as the back-bone for execution of the template query for a given task  [2]
            • ⇐ holds a cache on top of local SSDs 
              • in addition to in-memory caching of hot data
            • data can be transferred from one compute server to another
              • via dedicated data channels
              • the data channel is also used by the compute servers to send results to the SQL FE that returns the results to the user [2]
              • the life cycle of a query is tracked via control flow channels from the SQL FE to the DQP, and the DQP to the ES [2]
  • {concept} cell data abstraction
    • the key building block that enables abstracting data stores
      • abstracts DQP from the underlying store [1]
      • any dataset can be mapped to a collection of cells [1]
      • allows distributing query processing over data in diverse formats [1]
      • tailored for vectorized processing when the data is stored in columnar formats [1] 
      • further improves relational query performance
    • 2-dimensional
      • distributions (data alignment)
      • partitions (data pruning)
    • each cell is self-contained with its own statistics [1]
      • used for both global and local QO [1]
      • cells can be grouped physically in storage [1]
      • queries can selectively reference either cell dimension or even individual cells depending on predicates and type of operations present in the query [1]
    • {concept} distributed query processing (DQP) framework
      • operates at the cell level 
      • agnostic to the details of the data within a cell
        • data extraction from a cell is the responsibility of the (single node) query execution engine, which is primarily SQL Server, and is extensible for new data types [1], [2]
  • {concept} dataset
    • logically abstracted as a collection of cells [1] 
    • can be arbitrarily assigned to compute nodes to achieve parallelism [1]
    • uniformly distributed across a large number of cells 
      • [scale-out processing] each dataset must be distributed across thousands of buckets or subsets of data objects, such that they can be processed in parallel across nodes
  • {concept} session
    • supports a spectrum of consumption models, ranging from serverless ad-hoc queries to long-standing pools or clusters [1]
    • all data are accessible from any session [1]
      • multiple sessions can access all underlying data concurrently  [1]
  • {concept} Physical Metadata layer
    • new layer introduced in the SQL Server storage engine [2]
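
A minimal T-SQL sketch of the user-facing transaction behavior described above, assuming a hypothetical warehouse table dbo.Sales; the manifest handling (block uploads, commit, garbage collection) happens inside the engine and is transparent to the user:

  -- explicit multi-statement transaction against a Fabric warehouse
  BEGIN TRANSACTION;

  INSERT INTO dbo.Sales (SaleId, Amount)
  VALUES (1, 100.00), (2, 250.00);

  -- the update sees the rows inserted above (intra-transaction visibility),
  -- while other sessions still read the prior committed snapshot
  UPDATE dbo.Sales
  SET Amount = Amount * 1.10
  WHERE SaleId = 2;

  COMMIT TRANSACTION;
  -- a ROLLBACK TRANSACTION instead would discard both statements; the associated
  -- manifest data would be garbage-collected by system tasks rather than committed
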
See also: Polaris

References:
[1] Josep Aguilar-Saborit et al (2020) POLARIS: The Distributed SQL Engine in Azure Synapse, Proceedings of the VLDB Endowment PVLDB 13(12) [link]
[2] Josep Aguilar-Saborit et al (2024), Extending Polaris to Support Transactions [link]
[3] Gjnana P Duvvuri (2024) Microsoft Fabric Warehouse Deep Dive into Polaris Analytic Engine [link]

Resources:
[R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]
[R2] Patrick Pichler (2023) Data Warehouse (Polaris) vs. Data Lakehouse (Spark) in Microsoft Fabric [link]
[R3] Tiago Balabuch (2023) Microsoft Fabric Data Warehouse - The Polaris engine [link]

Acronyms:
CPU - Central Processing Unit
DAG - Directed Acyclic Graph
DB - Database
DCP - Distributed Computation Platform 
DQP - Distributed Query Processing 
DWH - Data Warehouses 
ES - Execution Service
LST - Log-Structured Table
SQL BE - SQL Backend
SQL FE - SQL Frontend
SSD - Solid State Disk
WAL - Write-Ahead Log
WLM - Workload Management

🏭🗒️Microsoft Fabric: External Data Sharing [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 26-Mar-2025

External data sharing [1]

[Microsoft Fabric] External data sharing 

  • {def} feature that enables Fabric users to share data from their tenant with users in another Fabric tenant (aka cross-tenant sharing) [1]
    • the data is shared in-place from OneLake storage locations in the sharer's tenant [1]
      • ⇒ no data is actually copied to the other tenant [1]
      • creates a OneLake shortcut in the other tenant that points back to the original data in the sharer's tenant [1]
      • data is exposed as read-only [1]
      • data can be consumed by any OneLake compatible Fabric workload in that tenant [1]
    • {benefit} allows for efficient and secure data sharing without duplicating data
      • the shared data remains read-only for the consumer, ensuring data integrity and consistency [2]
      • multiple tables and folders can be shared at once [2]
  • {prerequisite} Fabric admins must turn on external data sharing both in the sharer's tenant and in the external tenant
    • by specifying who can create and accept external data shares [1]
    • users can share data residing in tables or files within supported Fabric items [1]
      • require standard Fabric read and reshare permissions for the item being shared [1]
      • the user creating the share invites a user from another tenant with a link to accept the external data share [1]
        • upon accepting the share, the recipient chooses a lakehouse where a shortcut to the shared data will be created [1]
        • the links work only for users in external tenants
          • for sharing data within the same OneLake storage accounts with users in the same tenant, use OneLake shortcuts [1]
        • {limitation} shortcuts contained in folders that are shared via external data sharing won't resolve in the consumer tenant [1]
    • access is enabled via a dedicated Fabric-to-Fabric authentication mechanism 
    • ⇐ doesn’t require Entra B2B guest user access [1]
  • {operation} create an external data share (in the provider tenant)
    • external data shares can be created for tables or files in lakehouses and warehouses, and in KQL databases, SQL databases, and mirrored databases [1]
    • {limitation} the sharer can't control who has access to the data in the consumer's tenant [1]
  • {operation} accept an external data share (in the consuming tenant)
    • only lakehouses can be chosen for the operation
    • the consumer can grant access to the data to anyone [1]
      • incl. guest users from outside the consumer's organization [1]
    • data can be transferred across geographic boundaries when it's accessed within the consumer's tenant [1]
  • {operation} revoke external data shares
    • any user in the sharing tenant with read and reshare permissions on an externally shared item can revoke the external data share at any time [1]
      • via Manage permissions >> External data shares tab
      • can be performed on any item the user has read and reshare permissions on [3]
      • {warning} a revoked external data share can't be restored [3] 
        • irreversibly severs all access from the receiving tenant to the shared data [3]
        • a new external data share can be created instead [3]
  • applies to
    • lakehouse
      • an entire lakehouse schema can be shared [2]
        • shares all the tables in the schema [2]
        • any changes to the schema are immediately reflected in the consumer’s lakehouse [2]
    • mirrored database
    • KQL database
    • OneLake catalog
  • can be consumed via 
    • Spark workloads
      • notebooks or Spark
    • lakehouse SQL Analytics Endpoint
    • semantic models
    • ⇐ data can be shared from a provider and consumed in-place via SQL queries or in a Power BI report [2] (see the sketch after this outline)
  • {feature} external data sharing APIs
    • support service principals for admin and user operations
      • can be used to automate the creation or management of shares [2]
        • supports service principals or managed identity [2]
    • {planned} data warehouse support
      • share schemas or tables from a data warehouse [2]
    • {planned} shortcut sharing
      • share OneLake shortcuts using external data sharing [2]
      • ⇒ data residing outside of Fabric will be externally shareable
        • S3, ADLS Gen2, or any other supported shortcut location [2]
    • {planned} consumer APIs
      • consumer activities, including viewing share details and accepting shares, will soon be available via API [2]
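
Once a recipient accepts a share and the shortcut is created in a lakehouse, the data can be queried in place, e.g. through the lakehouse SQL analytics endpoint. A minimal sketch, assuming a hypothetical shortcut table named shared_sales in the consumer's lakehouse:

  -- read-only query against the shared data via the SQL analytics endpoint
  SELECT TOP 100 *
  FROM dbo.shared_sales;

  -- write attempts fail, since externally shared data is exposed as read-only
  -- INSERT INTO dbo.shared_sales (SaleId, Amount) VALUES (1, 100.00);
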

References:
[1] Microsoft Learn (2024) External data sharing in Microsoft Fabric [link]
[2] Microsoft Fabric Updates Blog (2025) External data sharing enhancements out now [link]
[3] Microsoft Learn (2024) Fabric: Manage external data shares [link]

Resources:
[R1] Analytics on Azure Blog (2024) External Data Sharing With Microsoft Fabric [link]
[R2] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

Acronyms:
ADLS - Azure Data Lake Storage
API - Application Programming Interface
B2B - business-to-business
KQL - Kusto Query Language

25 March 2025

🏭🗒️Microsoft Fabric: Security in Warehouse [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 25-Mar-2025

[Microsoft Fabric] Security in Warehouse
  • {def} suite of technologies aimed at safeguarding sensitive information in Fabric [1]
    • leverages SQL engine’s security features [1]
      • allows for security mechanism at the warehouse level [1]
      • ⇐ the warehouse and SQL analytics endpoint items also allow for the defining of native SQL security [4]
        • the permissions configured only apply to the queries executed against the respective surfaces [4]
      • the access to OneLake data is controlled separately through OneLake data access roles [4]
        • {recommendation} to ensure users with SQL specific permissions don't see data they don't have SQL access to, don't include those users in a OneLake data access role [4]
    • supports a range of data protection features that enable administrators to shield sensitive data from unauthorized access [1]
      • ⇐ across warehouses and SQL analytics endpoints without necessitating changes to applications [1]
    • {type} object-level security (OLS)
      • permissions governing DML operations [1]
        • applies to tables and views
        • ⇐ when denied, the user will be prevented from performing the respective operation
        • SELECT
          • allows users to view the data within the object [1]
        • INSERT
          • allows users to insert data in the object [1]
        • UPDATE
          • allows users to update data within the object [1]
        • DELETE
          • allows users to delete the data within the object [1]
      • permissions can be granted, revoked or denied on database objects [1] (illustrated in the sketch after this outline)
        • tables and views
        • GRANT
          • permission is granted to a user or role [1]
        • DENY
          • permission is denied to a user or role [1]
        • REVOKE
          • permission is revoked from a user or role [1]
        • ALTER
          • grants the user the ability to change the definition of the object [1]
        • CONTROL
          • grants the user all rights to the object [1]
      • {principle} least privilege
        • users and applications should only be given the permissions needed in order for them to complete the task
    • {type} column-level security (CLS)
      • allows to restrict column access to sensitive data [1]
        • provides granular control over who can access specific pieces of data [1]
          •  enhances the overall security of the data warehouse [1]
      • steps
        • identify the sensitive columns [1]
        • define access roles [1]
        • assign roles to users [1]
        • implement access control [1]
          • restrict access to a column based on the user's role [1]
    • {type} row-level security (RLS)
      • provides granular control over access to rows in a table based on group membership or execution context [1]
        • using WHERE clause filters [1]
      • works by associating a function (aka security predicate) with a table [1]
        • defined to return true or false based on certain conditions [1]
          • ⇐ typically involving the values of one or more columns in the table [1]
          • when a user attempts to access data in the table, the security predicate function is invoked [1]
            • if the function returns true, the row is accessible to the user; otherwise, the row doesn't show up in the query results [1]
        • the predicate can be as simple/complex as required [1]
        • the process is transparent to the user and is enforced automatically by SQL Server
          • ⇐ ensures consistent application of security rules [1]
      • implemented in two main steps:
        • filter predicates 
          • an inline table-valued function that filters the results based on the predicate defined [1]
        • security policy
          • invokes an inline table-valued function to protect access to the rows in a table [1]
            • because access control is configured and applied at the warehouse level, application changes are minimal - if any [1]
            • users can directly have access to the tables and can query their own data [1]
      • {recommendation} create a separate schema for predicate functions, and security policies [1]
      • {recommendation} avoid type conversions in predicate functions [1]
      • {recommendation} to maximize performance, avoid using excessive table joins and recursion in predicate functions [1]
    • {type} dynamic data masking (DDM) 
      • limits data exposure to nonprivileged users by obscuring sensitive data [1]
        • e.g. email addresses 
      • {benefit} enhance the security and manageability of the data [1]
      • {feature} real-time masking
        • when querying sensitive data, DDM applies dynamic masking to it in real time [1]
          • the actual data is never exposed to unauthorized users, thus enhancing the security of your data [1]
        • straightforward to implement [1]
        • doesn’t require complex coding, making it accessible for users of all skill levels [1]
        • {benefit} the data in the database isn’t changed when DDM is applied
          •   the actual data remains intact and secure, while nonprivileged users only see a masked version of the data [1]
      • {operation} define masking rule
        • set up at column level [1]
        • offers a suite of features [1]
          • comprehensive and partial masking capabilities [1]
          • supports several masking types
            • help prevent unauthorized viewing of sensitive data [1]
              • by enabling administrators to specify how much sensitive data to reveal [1]
                •   minimal effect on the application layer [1]
            • applied to query results, so the data in the database isn't changed 
              •   allows many applications to mask sensitive data without modifying existing queries  [1]
          • random masking function designed for numeric data [1]
        • {risk} unprivileged users with query permissions can infer the actual data since the data isn’t physically obfuscated [1]
      • {recommendation} DDM should be used as part of a comprehensive data security strategy [1]
        • should include
          • the proper management of object-level security with SQL granular permissions [1]
          • adherence to the principle of minimal required permissions [1]
    • {concept} Dynamic SQL 
      • allows T-SQL statements to be generated within a stored procedure or a query itself [1]
        • executed via the sp_executesql stored procedure
      • {risk} SQL injection attacks
        • use QUOTENAME to sanitize inputs [1]
  • write access to a warehouse or SQL analytics endpoint
    • {approach} granted through the Fabric workspace roles
      • the role automatically translates to a corresponding role in SQL that grants equivalent write access [4]
      • {recommendation} if a user needs write access to all warehouses and endpoints, assign the user to a workspace role [4]
        • use the Contributor role unless the user needs to assign other users to workspace roles [4]
      • {recommendation} grant direct access through SQL permissions if the user only needs to write to specific warehouses or endpoints [4]
    • {approach} grant read access to the SQL engine, and grant custom SQL permissions to write to some or all the data [4]
  • read access to a warehouse or SQL analytics endpoint
    • {approach} grant read access through the ReadData permission, granted as part of the Fabric workspace roles [4]
      •  ReadData permission maps the user to a SQL role that gives SELECT permissions on all tables in the warehouse or lakehouse
        • helpful if the user needs to see all or most of the data in the lakehouse or warehouse [4]
        • any SQL DENY permissions set on a particular lakehouse or warehouse still apply and limit access to tables [4]
        • row and column level security can be set on tables to restrict access at a granular level [4]
    • {approach} grant read access to the SQL engine, and grant custom SQL permissions to read to some or all the data [4]
    • if the user needs access only to a specific lakehouse or warehouse, the share feature provides access to only the shared item [4]
      • during the share, users can choose to give only Read permission or Read + ReadData 
        • granting Read permission allows the user to connect to the warehouse or SQL analytics endpoint but gives no table access [4]
        • granting users the ReadData permissions gives them full read access to all tables in the warehouse or SQL analytics endpoint
      • ⇐ additional SQL security can be configured to grant or deny access to specific tables [4]
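
    The mechanisms above map to standard T-SQL. A minimal sketch, assuming hypothetical objects that already exist (a principal named SalesRep, a dbo.Orders table with a SalesRepEmail column, and a dbo.Customers table with CustomerId, CustomerName and Email columns); it illustrates the syntax rather than a complete security design:

      -- object-level security (OLS): grant, deny, revoke on tables/views
      GRANT SELECT ON dbo.Orders TO SalesRep;
      DENY UPDATE, DELETE ON dbo.Orders TO SalesRep;

      -- column-level security (CLS): limit SELECT to a subset of columns
      GRANT SELECT ON dbo.Customers (CustomerId, CustomerName) TO SalesRep;

      -- row-level security (RLS): predicate function + security policy,
      -- kept in a dedicated schema as recommended above
      CREATE SCHEMA Security;
      GO
      CREATE FUNCTION Security.fn_OrderAccessPredicate (@SalesRepEmail AS varchar(128))
      RETURNS TABLE
      WITH SCHEMABINDING
      AS
          RETURN SELECT 1 AS fn_result
          WHERE @SalesRepEmail = USER_NAME();  -- row visible only to the owning user
      GO
      CREATE SECURITY POLICY Security.OrderFilter
          ADD FILTER PREDICATE Security.fn_OrderAccessPredicate(SalesRepEmail)
          ON dbo.Orders
          WITH (STATE = ON);
      GO

      -- dynamic data masking (DDM): obscure a sensitive column for nonprivileged users
      ALTER TABLE dbo.Customers
          ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

      -- dynamic SQL: sanitize identifiers with QUOTENAME to mitigate SQL injection
      DECLARE @TableName sysname = N'Orders';
      DECLARE @Sql nvarchar(max) = N'SELECT COUNT(*) FROM dbo.' + QUOTENAME(@TableName) + N';';
      EXEC sp_executesql @Sql;
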

    References:
    [1] Microsoft Learn (2024) Secure a Microsoft Fabric data warehouse [link]
    [2] Data Mozart (2025) Lock Up! Understanding Data Access Options in Microsoft Fabric, by Nikola Ilic [link]
    [3] Microsoft Learn (2024) Security in Microsoft Fabric [link]
    [4] Microsoft Learn (2024) Microsoft Fabric: How to secure a lakehouse for Data Warehousing teams [link]

    Resources:
    [R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]
    [R2] Microsoft Learn (2025) Fabric: Security for data warehousing in Microsoft Fabric [link]
    [R3] Microsoft Learn (2025) Fabric: Share your data and manage permissions [link]

    Acronyms:
    CLS - Column-Level Security
    DDM - Dynamic Data Masking
    DML - Data Manipulation Language 
    MF - Microsoft Fabric
    OLS - Object-Level Security
    RLS - Row-Level Security
    SQL - Structured Query Language

    🏭🗒️Microsoft Fabric: Security [Notes]

    Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

    Last updated: 25-Mar-2025

    Microsoft Fabric Security [2]
    [Microsoft Fabric] Security
    • {def} a comprehensive security framework designed for the Microsoft Fabric platform [1]
      • {goal} always on 
        • every interaction with Fabric is encrypted by default and authenticated using Microsoft Entra ID [1]
          • all communication between Fabric experiences travels through the Microsoft backbone internet [1]
          • data at rest is automatically stored encrypted [1]
          • support for extra security features [1]
            • ⇐ allow to regulate access to Fabric [1]
            • Private Links 
              • enable secure connectivity to Fabric by 
                • restricting access to the Fabric tenant from an Azure virtual network (VNet)
                • blocking all public access
              • ensures that only network traffic from that VNet is allowed to access Fabric features [1]
            • Entra Conditional Access 
          • the connection to data is protected by a firewall or a private network using trusted access [1]
            • access firewall-enabled ADL Gen2 accounts securely [1]
              • can be limited to specific workspaces [1]
                • workspaces that have a workspace identity can securely access ADL Gen 2 accounts with public network access enabled, from selected virtual networks and IP addresses [1]
              • workspace identities can only be created in workspaces associated with a Fabric F SKU capacity [1]
          • helps users connect to services quickly and easily from any device and any network [1]
            • each request to connect to Fabric is authenticated with Microsoft Entra ID [1]
              • allows users to safely connect to Fabric from their corporate office, when working at home, or from a remote location [1]
          • {feature} Conditional Access
            • allows to secure access to Fabric on every connection by
              • defining a list of IPs for inbound connectivity to Fabric [1]
              • using MFA [1]
              • restricting traffic based on parameters such as country of origin or device type [1]
      • {goal} compliant
        • data sovereignty provided out-of-box with multi geo capacities [1]
        • support for a wide range of compliance standards [1]
        • Fabric services follow the SDL
          • a set of strict security practices that support security assurance and compliance requirements [2]
          • helps developers build more secure software by reducing the number and severity of vulnerabilities in software, while reducing development cost [2]
      • {goal} governable
        • leverages a set of governance tools
          • data lineage
          • information protection labels
          • data loss prevention 
          • Purview integration 
      • configurable
        •  in accordance with organizational policies [1]
      • evolving 
        • new features and controls are added regularly [1]
    • {feature} managed private endpoints 
      • allow secure connections to data sources without exposing them to the public network or requiring complex network configurations [1]
        • e.g. as Azure SQL databases
    • {feature} managed virtual networks
      • virtual networks that are created and managed by Microsoft Fabric for each Fabric workspace [1]
      • provide network isolation for Fabric Spark workloads
        • the compute clusters are deployed in a dedicated network and are no longer part of the shared virtual network [1]
      • enable network security features
        • managed private endpoints
        • private link support
    • {feature} data gateway
      • allows to connect to on-premises data sources or a data source that might be protected by a firewall or a virtual network
      • {option} On-premises data gateway
        • acts as a bridge between on-premises data sources and Fabric [1]
        • installed on a server within the network [1]
        • allows Fabric to connect to data sources through a secure channel without the need to open ports or make changes to the network [1]
      • {option} Virtual network (VNet) data gateway
        • allows to connect from Microsoft Cloud services to Azure data services within a VNet, without the need of an on-premises data gateway [1]
    • {feature} Azure service tags
      • allows to ingest data from data sources deployed in an Azure virtual network without the use of data gateways [1]
        • e.g. VMs, Azure SQL MI and REST APIs
      • can be used to get traffic from a virtual network or an Azure firewall
        • e.g. outbound traffic to Fabric so that a user on a VM can connect to Fabric SQL connection strings from SSMS, while blocked from accessing other public internet resources [1]
    • {feature} IP allow-lists
      • allows to enable an IP allow-list on organization's network to allow traffic to and from Fabric
      • useful for data sources that don't support service tags [1]
        • e.g. on-premises data sources
    • {feature} Telemetry
      • used to maintain performance and reliability of the Fabric platform [2]
      • the telemetry store is designed to be compliant with data and privacy regulations for customers in all regions where Fabric is available [2]
    • {process} authentication
      • relies on Microsoft Entra ID to authenticate users (or service principals) [2]
      • when authenticated, users receive access tokens from Microsoft Entra ID [2]
        • used to perform operations in the context of the user [2]
      • {feature} conditional access
        • ensures that tenants are secure by enforcing multifactor authentication [2]
          • allows only Microsoft Intune enrolled devices to access specific services [1] 
        • restricts user locations and IP ranges.
    • {process} authorization
      • all Fabric permissions are stored centrally by the metadata platform
        • Fabric services query the metadata platform on demand to retrieve authorization information and to authorize and validate user requests [2]
      • authorization information is sometimes encapsulated into signed tokens [2]
        • only issued by the back-end capacity platform [1]
        • include the access token, authorization information, and other metadata [1]
    • {concept} tenant metadata 
      • information about the tenant 
      • is stored in a metadata platform cluster to which the tenant is assigned
        • located in a single region that meets the data residency requirements of that region's geography [2]
        • can include customer data 
        • customers can control where their workspaces are located
          • in the same geography as the metadata platform cluster
            • by explicitly assigning workspaces on capacities in that region [2]
            • by implicitly using Fabric Trial, Power BI Pro, or Power BI Premium Per User license mode [2]
              • all customer data is stored and processed in this single geography [2]
          • in Multi-Geo capacities located in geographies (geos) other than their home region [2]
            • compute and storage is located in the multi-geo region [2]
              • (including OneLake and experience-specific storage) [2]
            • {exception} the tenant metadata remains in the home region
            • customer data will only be stored and processed in these two geographies [2]
    • {concept} data-at-rest
      • all Fabric data stores are encrypted at rest [2]
        • by using Microsoft-managed keys
        • includes customer data as well as system data and metadata [2]
        •  data is never persisted to permanent storage while in an unencrypted state [1]
          • data can be processed in memory in an unencrypted state [2]
      • {default} encrypted using platform managed keys (PMK)
        • Microsoft is responsible for all aspects of key management [2]
        • data-at-rest on OneLake is encrypted using Microsoft-managed keys [3]
        • {alternative} Customer-managed keys (CMK) 
          • allow to encrypt data at-rest using customer keys [3]
            •   customer assumes full control of the key [3]
          • {recommendation} use cloud storage services with CMK encryption enabled and access data from Fabric using OneLake shortcuts [3]
            • data continues to reside on a cloud storage service or an external storage solution where encryption at rest using CMK is enabled [3]
            • customers can perform in-place read operations from Fabric whilst staying compliant [3] 
            • shortcuts can be accessed by other Fabric experiences [3]
    • {concept} data-in-transit
      • refers to traffic between Microsoft services routed over the Microsoft global network [2]
      • inbound communication
        • always encrypted with at least TLS 1.2. Fabric negotiates to TLS 1.3 whenever possible [2]
        • inbound protection
          •  concerned with how users sign in and have access to Fabric [3]
      • outbound communication to customer-owned infrastructure 
        • adheres to secure protocols [2]
          • {exception} might fall back to older, insecure protocols when newer protocols aren't supported [2]
            • incl. TLS 1
        • outbound protection
          • concerned with securely accessing data behind firewalls or private endpoints [3]


    References:
    [1] Microsoft Learn (2024) Security in Microsoft Fabric [link]
    [2] Microsoft Learn (2024) Microsoft Fabric security fundamentals [link]
    [3] Microsoft Learn (2024) Microsoft Fabric end-to-end security scenario [link]

    Resources:
    [R1] Microsoft Learn (2024) Microsoft Fabric security [link]
    [R2] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

    Acronyms:
    ADL - Azure Data Lake
    API - Application Programming Interface
    CMK - Customer-Managed Keys
    MF - Microsoft Fabric
    MFA - Multifactor Authentication 
    MI - Managed Instance 
    PMK - Platform-Managed Keys
    REST - REpresentational State Transfer
    SDL - Security Development Lifecycle
    SKU - Stock Keeping Unit
    TLS  - Transport Layer Security
    VM - Virtual Machine
    VNet - virtual network
    VPN - Virtual Private Network

    18 March 2025

    🏭🗒️Microsoft Fabric: Statistics in Warehouse [Notes]

    Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

    Last updated: 18-Mar-2025

    [Microsoft Fabric] Statistics
    • {def} objects that contain relevant information about data, to allow the query optimizer to estimate plan costs [1]
      • critical for the warehouse and lakehouse SQL endpoint for executing queries quickly and efficiently [3]
        • when a query is executed, the engine tries to collect existing statistics for certain columns in the query and use that information to assist in choosing an optimal execution plan [3]
        • inaccurate statistics can lead to unoptimized query plans and execution times [5]
    • {type} user-defined statistics
      • statistics defined manually by the users via DDL statement [1]
      • users can create, update and drop statistics (see the T-SQL sketch after this outline)
        • via CREATE|UPDATE|DROP STATISTICS
      • users can review the contents of histogram-based single-column statistics [1]
        • via DBCC SHOW_STATISTICS
          • ⇐ only a limited version of these statements is supported [1]
      • {recommendation} focus on columns heavily used in query workloads
        • e.g. GROUP BYs, ORDER BYs, filters, and JOINs
      • {recommendation} consider updating column-level statistics regularly [1]
        • e.g. after data changes that significantly change rowcount or distribution of the data [1]
    • {type} automatic statistics
      • statistics created and maintained automatically by the query engine at query time [1]
      • when a query is issued and query optimizer requires statistics for plan exploration, MF automatically creates those statistics if they don't already exist [1]
        • then the query optimizer can utilize them in estimating the plan costs of the triggering query [1]
        • if the query engine determines that existing statistics relevant to query no longer accurately reflect the data, those statistics are automatically refreshed [1]
          • these automatic operations are done synchronously [1]
            • the query duration includes this time [1]
    • {object type} histogram statistics
      • created per column needing histogram statistics at query time [1]
      • contains histogram and density information regarding the distribution of a particular column [1]
      • similar to the statistics automatically created at query-time in Azure Synapse Analytics dedicated pools [1]
      • name begins with _WA_Sys_.
      • contents can be viewed with DBCC SHOW_STATISTICS
    • {object type} average column length statistics
      • created for variable character (varchar) columns with length greater than 100 that need average column length at query-time [1]
      • contain a value representing the average row size of the varchar column at the time of statistics creation [1]
      • name begins with ACE-AverageColumnLength_
      • contents cannot be viewed and are nonactionable by users [1]
    • {object type} table-based cardinality statistics
      • created per table needing cardinality estimation at query-time [1]
      • contain an estimate of the rowcount of a table [1]
      • named ACE-Cardinality [1]
      • contents cannot be viewed and are nonactionable by user [1]
    • [lakehouse] SQL analytics endpoint
      • uses the same engine as the warehouse to serve high performance, low latency SQL queries [4]
      • {feature} automatic metadata discovery
        • a seamless process reads the delta logs and the files folder, and ensures the SQL metadata for tables is always up to date [4]
          • e.g. statistics [4]
    • {limitation} only single-column histogram statistics can be manually created and modified [1]
    • {limitation} multi-column statistics creation is not supported [1]
    • {limitation} other statistics objects might appear in sys.stats
      • besides the statistics created manually/automatically [1]
        • ⇐ the objects are not used for query optimization [1]
    • {limitation} if a transaction has data insertion into an empty table and issues a SELECT before rolling back, the automatically generated statistics can still reflect the uncommitted data, causing inaccurate statistics [5]
      • {recommendation} update statistics for the columns mentioned in the SELECT [5]
    • {recommendation} ensure all table statistics are updated after large DML transactions [2]
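
    A minimal T-SQL sketch of the user-defined statistics workflow described above, assuming a hypothetical dbo.Orders table with a CustomerId column that appears frequently in filters and joins:

      -- create a single-column histogram statistics object
      CREATE STATISTICS stats_orders_customerid
      ON dbo.Orders (CustomerId) WITH FULLSCAN;

      -- refresh it after DML that significantly changes rowcount or distribution
      UPDATE STATISTICS dbo.Orders (stats_orders_customerid) WITH FULLSCAN;

      -- inspect the histogram (only a limited version of the statement is supported)
      DBCC SHOW_STATISTICS ('dbo.Orders', 'stats_orders_customerid') WITH HISTOGRAM;

      -- list statistics objects on the table, including automatically created ones
      -- (e.g. names starting with _WA_Sys_ or ACE-)
      SELECT s.name, s.auto_created, s.user_created
      FROM sys.stats AS s
      WHERE s.object_id = OBJECT_ID('dbo.Orders');

      -- drop a user-defined statistics object that is no longer needed
      DROP STATISTICS dbo.Orders.stats_orders_customerid;
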


    References:
    [1] Microsoft Learn (2025) Fabric: Statistics in Fabric data warehousing [link]
    [2] Microsoft Learn (2025) Fabric: Troubleshoot the Warehouse [link]
    [3] Microsoft Fabric Updates Blog (2023) Microsoft Fabric July 2023 Update [link]
    [4] Microsoft Learn (2024) Fabric: Better together: the lakehouse and warehouse [link]
    [5] Microsoft Learn (2024) Fabric: Transactions in Warehouse tables in Microsoft Fabric [link]

    Resources:
    [R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

    Acronyms:
    DDL - Data Definition Language
    MF - Microsoft Fabric

    17 March 2025

    🏭🗒️Microsoft Fabric: Z-Order [Notes]

    Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

    Last updated: 17-Mar-2025

    [Microsoft Fabric] Z-Order
    • {def} technique to collocate related information in the same set of files [2]
      • ⇐ reorganizes the layout of each data file so that similar column values are strategically collocated near one another for maximum efficiency [1]
      • {benefit} efficient query performance
        • reduces the amount of data to read [2] for certain queries
          • when the data is appropriately ordered, more files can be skipped [3]
          • particularly important for the ordering of multiple columns [3]
      • {benefit} data skipping
        • automatically skips irrelevant data, further enhancing query speeds
          • via data-skipping algorithms [2]
      • {benefit} flexibility
        • can be applied to multiple columns, making it versatile for various data schemas
      • aims to produce evenly-balanced data files with respect to the number of tuples
        • ⇐ but not necessarily data size on disk [2]
          • ⇐ the two measures are most often correlated [2]
            • ⇐ but there can be situations when that is not the case, leading to skew in optimize task times [2]
      • via the ZORDER BY clause (see the Spark SQL sketch after this outline)
        • applicable to columns with high cardinality commonly used in query predicates [2]
        • multiple columns can be specified as a comma-separated list
          • {warning} the effectiveness of the locality drops with each extra column [2]
            • has tradeoffs
              • it’s important to analyze query patterns and select the right columns when Z Ordering data [3]
          • {warning} using columns that do not have statistics collected on them is  ineffective and wastes resources [2] 
            • statistics collection can be configured on certain columns by reordering columns in the schema, or by increasing the number of columns to collect statistics on [2]
        • {characteristic} not idempotent
          • every time it is executed, it will try to create a new clustering of data in all files in a partition [2]
            • it includes new and existing files that were part of previous z-ordering [2]
        • compatible with v-order
      • {concept} [Databricks] liquid clustering 
        • replaces table partitioning and ZORDER to simplify data layout decisions and optimize query performance [4] [6]
          • not compatible with the respective features [4] [6]
        • tables created with liquid clustering enabled have numerous Delta table features enabled at creation [4] [6]
        • provides flexibility to redefine clustering keys without rewriting existing data [4] [6]
          • ⇒ allows data layout to evolve alongside analytic needs over time [4] [6]
        • applies to 
          • streaming tables 
          • materialized views
        • {scenario} tables often filtered by high cardinality columns [4] [6]
        • {scenario} tables with significant skew in data distribution [4] [6]
        • {scenario} tables that grow quickly and require maintenance and tuning effort [4] [6]
        • {scenario} tables with concurrent write requirements [4] [6]
        • {scenario} tables with access patterns that change over time [4] [6]
        • {scenario} tables where a typical partition key could leave the table with too many or too few partitions [4] [6]
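
      A minimal Spark SQL sketch (e.g. run from a Fabric notebook) of the ZORDER BY clause mentioned above; the table name, columns and partition column are hypothetical:

        -- rewrite the Delta table's files so that rows with similar values of the chosen
        -- high-cardinality predicate columns are collocated; effectiveness drops per extra column
        OPTIMIZE sales_orders
        ZORDER BY (customer_id, order_date);

        -- optionally restrict the rewrite to a partition to limit the amount of data rewritten
        OPTIMIZE sales_orders
        WHERE order_year = 2024
        ZORDER BY (customer_id);
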

      References:
      [1] Bennie Haelen & Dan Davis (2024) Delta Lake Up & Running: Modern Data Lakehouse Architectures with Delta Lake
      [2] Delta Lake (2023) Optimizations [link]
      [3] Delta Lake (2023) Delta Lake Z Order, by Matthew Powers [link]
      [4] Delta Lake (2025) Use liquid clustering for Delta tables [link]
      [5] Databricks (2025) Delta Lake table format interoperability [link]
      [6] Microsoft Learn (2025) Use liquid clustering for Delta tables [link]

      Resources:
      [R1] Azure Guru (2024) Z Order in Delta Lake - Part 1 [link]
      [R2] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

      Acronyms:
      MF - Microsoft Fabric 

      🏭🗒️Microsoft Fabric: V-Order [Notes]

      Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

      Last updated: 17-Mar-2025

      [Microsoft Fabric] V-Order
      • {def} write time optimization to the parquet file format that enables fast reads under the MF compute engine [2]
        • all parquet engines can read the files as regular parquet files [2]
        • results in smaller and therefore faster files to read [5]
          • {benefit} improves read performance 
          • {benefit} decreases storage requirements
          • {benefit} optimizes resources' usage
            • reduces the compute resources required for reading data
              • e.g. network bandwidth, disk I/O, CPU usage
        • still conforms to the open-source Parquet file format [5]
          • they can be read by non-Fabric tools [5]
        • delta tables created and loaded by Fabric items automatically apply V-Order
          • e.g. data pipelines, dataflows, notebooks [5]
        • delta tables and its features are orthogonal to V-Order [2]
          •  e.g. Z-Order, compaction, vacuum, time travel
          • table properties and optimization commands can be used to control the v-order of the partitions [2]
        • compatible with Z-Order [2]
        • not all files have this optimization applied [5]
          • e.g. Parquet files uploaded to a Fabric lakehouse, or that are referenced by a shortcut 
          • the files can still be read, but the read performance likely won't be as fast as an equivalent Parquet file that's had V-Order applied [5]
        • required by certain features
          • [hash encoding] to assign a numeric identifier to each unique value contained in the column [5]
        • {command} OPTIMIZE 
          • optimizes a Delta table to coalesce smaller files into larger ones [5]
          • can apply V-Order to compact and rewrite the Parquet files [5]
      • [warehouse] 
        • works by applying certain operations on Parquet files
          • special sorting
          • row group distribution
          • dictionary encoding
          • compression 
        • enabled by default
        •  ⇒ compute engines require less network, disk, and CPU resources to read data from storage [1]
          • provides cost efficiency and performance [1]
            • the effect of V-Order on performance can vary depending on tables' schemas, data volumes, query, and ingestion patterns [1]
          • fully-compliant to the open-source parquet format [1]
            • ⇐ all parquet engines can read it as regular parquet files [1]
        • required by certain features
          • [Direct Lake mode] depends on V-Order
        • {operation} disable V-Order
          • causes any new Parquet files produced by the warehouse engine to be created without V-Order optimization [3]
          • irreversible operation
            •  once disabled, it cannot be enabled again [3]
          • {scenario} write-intensive warehouses
            • warehouses dedicated to staging data as part of a data ingestion process [1]
          • {warning} consider the effect of V-Order on performance before deciding to disable it [1]
            • {recommendation} test how V-Order affects the performance of data ingestion and queries before deciding to disable it [1]
          • via ALTER DATABASE CURRENT SET VORDER = OFF; [3]
        • {operation} check current status (see the sketch after this outline)
          • via SELECT name, is_vorder_enabled FROM sys.databases; [post]
      • {feature} [lakehouse] Load to Table
        • allows to load a single file or a folder of files to a table [6]
        • tables are always loaded using the Delta Lake table format with V-Order optimization enabled [6]
      • [Direct Lake semantic model] 
        • data is prepared for fast loading into memory [5]
          • makes less demands on capacity resources [5]
          • results in faster query performance [5]
            • because less memory needs to be scanned [5]
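
      A minimal sketch of the commands referenced above; the warehouse statements are T-SQL, the last one is Spark SQL against a hypothetical lakehouse table:

        -- warehouse: check whether V-Order is currently enabled
        SELECT name, is_vorder_enabled
        FROM sys.databases;

        -- warehouse: disable V-Order (irreversible; intended for write-intensive/staging scenarios)
        ALTER DATABASE CURRENT SET VORDER = OFF;

        -- lakehouse (Spark SQL): compact a Delta table and rewrite its Parquet files with V-Order
        OPTIMIZE sales_orders VORDER;
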

      References:
      [1] Microsoft Learn (2024) Fabric: Understand V-Order for Microsoft Fabric Warehouse [link]
      [2] Microsoft Learn (2024) Delta Lake table optimization and V-Order [link]
      [3] Microsoft Learn (2024) Disable V-Order on Warehouse in Microsoft Fabric [link]
      [4] Miles Cole (2024) To V-Order or Not: Making the Case for Selective Use of V-Order in Fabric Spark [link]
      [5] Microsoft Learn (2024) Understand storage for Direct Lake semantic models [link]
      [6] Microsoft Learn (2025) Fabric: Load to Delta Lake table [link]

      Resources:
      [R1] Serverless.SQL (2024) Performance Analysis of V-Ordering in Fabric Warehouse: On or Off?, by Andy Cutler [link]
      [R2] Redgate (2023) Microsoft Fabric: Checking and Fixing Tables V-Order Optimization, by Dennes Torres [link]
      [R3] Sandeep Pawar (2023) Checking If Delta Table in Fabric is V-order Optimized [link]
      [R4] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

      Acronyms:
      MF - Microsoft Fabric
