Disclaimer: This is a work in progress intended to consolidate information from various sources for learning purposes. For the latest information, please consult the documentation (see the links below)!
Last updated: 10-Mar-2024
Dataflow (Gen2) Architecture [4]
[Microsoft Fabric] Dataflow (Gen2)
- new generation of dataflows that resides alongside the Power BI Dataflow (Gen1) [2]
- brings new features and improved experience [2]
- similar to Dataflow Gen1 in Power BI [2]
- allows users to
- extract data from various sources
- transform it using a wide range of transformation operations
- load it into a destination [1]
- {goal} provide an easy, reusable way to perform ETL tasks using Power Query Online [1] (see the M sketch below)
- allows promoting reusable ETL logic
- ⇒ prevents the need to create additional connections to the data source
- offers a wide variety of transformations
- can be horizontally partitioned
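A minimal Power Query (M) sketch of the kind of ETL query a dataflow encapsulates; the server, database, and column names are hypothetical placeholders, and the load step itself is configured on the dataflow rather than written in M:

```m
let
    // Extract: connect to a relational source (hypothetical server/database)
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Transform: keep recent rows and enforce column types
    Recent = Table.SelectRows(Orders, each [OrderDate] >= #date(2023, 1, 1)),
    Typed = Table.TransformColumnTypes(Recent, {{"OrderDate", type date}, {"Amount", Currency.Type}})
in
    // Load: the query's output is written to the configured data destination
    Typed
```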
- {component} Lakehouse
- used to stage data being ingested
- {component} Warehouse
- used as a compute engine and as a means to write results back to staging or to supported output destinations faster
- {component} Mashup Engine
- extracts, transforms, or loads the data to staging or data destinations when either [4]
- Warehouse compute cannot be used [4]
- staging is disabled for a query [4]
- {operation} creating a dataflow
- can be created in a
- Data Factory workload
- Power BI workspace
- Lakehouse
- {operation} publishing a dataflow
- generates the dataflow's definition
- ⇐ the program that runs once the dataflow is refreshed to produce tables in staging storage and/or output destination [4]
- used by the dataflow engine to generate an orchestration plan, manage resources, and orchestrate execution of queries across data sources, gateways, and compute engines, and to create tables in either the staging storage or data destination [4]
- saves changes and runs validations that must be performed in the background [2]
- {operation} refreshing a dataflow
- {operation} running a dataflow
- can be run
- manually
- on a refresh schedule
- as part of a Data Pipeline orchestration
- {feature} author dataflows with Power Query
- uses the full Power Query experience of Power BI dataflows [2]
- {feature} shorter authoring flow
- uses a step-by-step experience for getting the data into your dataflow [2]
- the number of steps required to create dataflows was reduced [2]
- a few new features were added to improve the experience [2]
- {feature} Auto-Save and background publishing
- changes made to a dataflow are autosaved to the cloud (aka draft version of the dataflow) [2]
- ⇐ without having to wait for the validation to finish [2]
- {functionality} save as draft
- stores a draft version of the dataflow every time you make a change [2]
- a seamless experience that doesn't require any user input [2]
- {concept} published version
- the version of the dataflow that passed validation and is ready to refresh [5]
- {feature} integration with data pipelines
- integrates directly with Data Factory pipelines for scheduling and orchestration [2]
- {feature} high-scale compute
- leverages a new, higher-scale compute architecture [2]
- improves the performance of both transformations of referenced queries and get data scenarios [2]
- creates both Lakehouse and Warehouse items in the workspace, and uses them to store and access data to improve performance for all dataflows [2]
- {feature} improved monitoring and refresh history
- integrates support for the Monitoring Hub [2]
- upgraded Refresh History experience [2]
- {feature} get data via Dataflows connector
- supports a wide variety of data source connectors
- including cloud and on-premises relational databases
- {feature|planned} incremental refresh
- enables you to incrementally extract data from data sources, apply Power Query transformations, and load into various output destinations [6] (see the sketch below)
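A sketch of the manual pattern (cf. [R5]) that approximates incremental loads until the feature ships: filter the source on a watermark column so only recent rows are extracted; the server, table, and column names are assumptions:

```m
let
    // hypothetical 7-day watermark; ModifiedDate is assumed to be a datetime column
    Cutoff = DateTime.LocalNow() - #duration(7, 0, 0, 0),
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // the filter folds back to the source, so only the increment is read
    Increment = Table.SelectRows(Orders, each [ModifiedDate] >= Cutoff)
in
    Increment
```

Paired with the append update method on the data destination, each refresh then adds only the extracted increment.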
- {feature|planned} Fast Copy
- enables large-scale data ingestion directly utilizing the pipelines Copy Activity capability [6]
- supports sources such as Azure SQL databases, CSV, and Parquet files in Azure Data Lake Storage and Blob Storage [6]
- significantly scales up the data processing capacity providing high-scale ELT capabilities [6]
- {feature|planned} Cancel refresh
- enables users to cancel ongoing Dataflow Gen2 refreshes from the workspace items view [6]
- {feature} data destinations
- allows users to
- specify an output destination
- separate ETL logic and destination storage [2]
- every tabular data query can have a data destination [3]
- available destinations
- Azure SQL databases
- Azure Data Explorer (Kusto)
- Fabric Lakehouse
- Fabric Warehouse
- Fabric KQL database
- a destination can be specified for every query individually [3]
- multiple different destinations can be used within a dataflow [3]
- connecting to the data destination is similar to connecting to a data source
- {limitation} functions and lists aren't supported
- {operation} creating a new table
- {default} the table has the same name as the query
- {operation} picking an existing table
- {operation} deleting a table manually from the data destination
- doesn't recreate the table on the next refresh [3]
- {operation} reusing queries from Dataflow Gen1
- {method} export Dataflow Gen1 query and import it into Dataflow Gen2
- export the queries as a PQT file and import them into Dataflow Gen2 [2]
- {method} copy and paste in Power Query
- copy the queries and paste them in the Dataflow Gen2 editor [2] (see the example below)
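For illustration, a self-contained query of the kind that round-trips cleanly between the Gen1 and Gen2 editors via PQT export or copy and paste (the URL is a placeholder):

```m
let
    // hypothetical public CSV file; no gateway or stored credentials involved
    Source = Csv.Document(Web.Contents("https://example.com/sales.csv"), [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    Promoted
```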
- automatic settings:
- {limitation} supported only for Lakehouse and Azure SQL database
- {setting} Update method replace:
- data in the destination is replaced at every dataflow refresh with the output data of the dataflow [3]
- {setting} Managed mapping:
- the mapping is automatically adjusted when republishing the dataflow to reflect the change
- ⇒ doesn't need to be updated manually into the data destination experience every time changes occur [3]
- {setting} Drop and recreate table:
- on every dataflow refresh the table is dropped and recreated to allow schema changes
- {limitation} the dataflow refresh fails if any relationships or measures were added to the table [3]
- update methods
- {method} replace:
- on every dataflow refresh, the data is dropped from the destination and replaced by the output data of the dataflow.
- {limitation} not supported by Fabric KQL databases and Azure Data Explorer
- {method} append:
- on every dataflow refresh, the output data from the dataflow is appended to the existing data in the data destination table
- staging
- {default} enabled
- allows using Fabric compute to execute the queries
- ⇐ enhances the performance of query processing
- the data is loaded into the staging location
- ⇐ an internal Lakehouse location accessible only by the dataflow itself
- [Warehouse] staging is required before the write operation to the data destination
- ⇐ improves performance
- {limitation} only loading into the same workspace as the dataflow is supported
- using staging locations can enhance performance in some cases
- disabled
- {recommendation} [Lakehouse] disable staging on the query to avoid loading twice into a similar destination
- ⇐ once for staging and once for data destination
- improves the dataflow's performance
- {scenario} use a dataflow to load data into the lakehouse and then use a notebook to analyze the data [2]
- {scenario} use a dataflow to load data into an Azure SQL database and then use a data pipeline to load the data into a data warehouse [2]
- {benefit} extends data with consistent data, such as a standard date dimension table [1] (see the sketch after this list)
- {benefit} allows self-service users to access a subset of the data warehouse separately [1]
- {benefit} optimizes performance with dataflows, which enable extracting data once for reuse, reducing data refresh time for slower sources [1]
- {benefit} simplifies data source complexity by only exposing dataflows to larger analyst groups [1]
- {benefit} ensures consistency and quality of data by enabling users to clean and transform data before loading it to a destination [1]
- {benefit} simplifies data integration by providing a low-code interface that ingests data from various sources [1]
- {limitation} not a replacement for a data warehouse [1]
- {limitation} row-level security isn't supported [1]
- {limitation} a workspace on a Fabric or Fabric trial capacity is required [1]
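As an illustration of the first benefit above, a possible M sketch of a standard date dimension query that can be defined once and reused across dataflows; the date range and derived columns are assumptions:

```m
let
    // hypothetical date range for the dimension
    StartDate = #date(2020, 1, 1),
    EndDate = #date(2025, 12, 31),
    DayCount = Duration.Days(EndDate - StartDate) + 1,
    Dates = List.Dates(StartDate, DayCount, #duration(1, 0, 0, 0)),
    AsTable = Table.FromList(Dates, Splitter.SplitByNothing(), {"Date"}),
    Typed = Table.TransformColumnTypes(AsTable, {{"Date", type date}}),
    WithYear = Table.AddColumn(Typed, "Year", each Date.Year([Date]), Int64.Type),
    WithMonth = Table.AddColumn(WithYear, "Month", each Date.Month([Date]), Int64.Type)
in
    WithMonth
```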
Feature | Dataflow Gen2 | Dataflow Gen1
--- | --- | ---
Author dataflows with Power Query | ✓ | ✓
Shorter authoring flow | ✓ |
Auto-Save and background publishing | ✓ |
Data destinations | ✓ |
Improved monitoring and refresh history | ✓ |
Integration with data pipelines | ✓ |
High-scale compute | ✓ |
Get Data via Dataflows connector | ✓ | ✓
Direct Query via Dataflows connector | | ✓
Incremental refresh | ✓* |
Fast Copy | ✓* |
Cancel refresh | ✓* |
AI Insights support | | ✓
✓* - planned [6]
Acronyms:
ETL - Extract, Transform, Load
PQO - Power Query Online
PQT - Power Query Template
References:
[1] Microsoft Learn: Fabric (2023) Ingest data with Microsoft Fabric (link)
[2] Microsoft Learn: Fabric (2023) Getting from Dataflow Generation 1 to Dataflow Generation 2 (link)
[3] Microsoft Learn: Fabric (2023) Dataflow Gen2 data destinations and managed settings (link)
[4] Microsoft Learn: Fabric (2023) Dataflow Gen2 pricing for Data Factory in Microsoft Fabric (link)
[5] Microsoft Learn: Fabric (2023) Save a draft of your dataflow (link)
[6] Microsoft Learn: Fabric (2023) What's new and planned for Data Factory in Microsoft Fabric (link)
Resources:
[R1] Arshad Ali & Bradley Schacht (2024) Learn Microsoft Fabric (link)
[R2] Microsoft Learn: Fabric (2023) Data Factory limitations overview (link)
[R3] Microsoft Fabric Blog (2023) Data Factory Spotlight: Dataflow Gen2, by Miguel Escobar (link)
[R4] Microsoft Learn: Fabric (2023) Dataflow Gen2 connectors in Microsoft Fabric (link)
[R5] Microsoft Learn: Fabric (2023) Pattern to incrementally amass data with Dataflow Gen2 (link)
[R6] Fourmoo (2023) Microsoft Fabric – Comparing Dataflow Gen2 vs Notebook on Costs and usability, by Gilbert Quevauvilliers (link)
[R7] Microsoft Learn: Fabric (2023) A guide to Fabric Dataflows for Azure Data Factory Mapping Data Flow users (link)
[R8] Microsoft Learn: Fabric (2023) Quickstart: Create your first dataflow to get and transform data (link)
[R9] Microsoft Learn: Fabric (2023) Microsoft Fabric decision guide: copy activity, dataflow, or Spark (link)
[R10] Microsoft Fabric Blog (2023) Dataflows Gen2 data destinations and managed settings, by Miquella de Boer (link)
[R11] Microsoft Fabric Blog (2023) Service principal support to connect to data in Dataflow, Datamart, Dataset and Dataflow Gen 2, by Miquella de Boer (link)
[R12] Chris Webb's BI Blog (2023) Fabric Dataflows Gen2: To Stage Or Not To Stage? (link)
[R13] Power BI Tips (2023) Let's Learn Fabric ep.7: Fabric Dataflows Gen2 (link)