09 December 2024

🏭🗒️Microsoft Fabric: Microsoft Fabric [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 8-Dec-2024

Microsoft Fabric 

  • {goal} complete (end-to-end) analytics platform [6]
    • {characteristic} unified
      • {objective} provides a single, integrated environment for the entire organization
        • {benefit} data professionals and business users can collaborate on data projects [5] and solutions
    • {characteristic} serverless SaaS model (aka SaaS-ified)
      • {objective} provisioned automatically with the tenant [6]
      • {objective} highly scalable [5]
      • {objective} cost-effective [5]
      • {objective} accessible 
        • ⇐ from anywhere with an internet connection [5]
      • {objective} continuous updates
        • ⇐ provided by Microsoft
      • {objective} continuous maintenance 
        • ⇐ provided by Microsoft
      • provides a set of integrated services that enable users to ingest, store, process, and analyze data in a single environment [5]
    • {objective} secure
    • {objective} governed
  • {goal} lake-centric
    • {characteristic} OneLake-based
      • all workloads automatically store their data in the OneLake workspace folders [6]
      • all the data is organized in an intuitive hierarchical namespace [6]
      • data is automatically indexed [6]
      • provides a set of features 
        • discovery
        • MIP labels
        • lineage
        • PII scans
        • sharing
        • governance
        • compliance
    • {characteristic} one copy
      • available for all computes 
      • all compute engines store their data automatically in OneLake
        •  the data is stored in a (single) common format
          •  delta parquet file format
            • open standards format
            • the storage format for all tabular data in Microsoft Fabric 
        • ⇐ the data is directly accessible by all the engines [6]
          • ⇐ no import/export needed
      • all compute engines are fully optimized to work with Delta Parquet as their native format [6]
      • a shared universal security model is enforced across all the engines [6]
    • {characteristic} open at every tier
  • {goal} empowering
    • {characteristic} intuitive
    • {characteristic} built into M365
    • {characteristic} insight to action
  • {goal} AI-powered
    • {characteristic} Copilot accelerated 
    • {characteristic} ChatGPT enabled
    • {characteristic} AI-driven insights
  • complete analytics platform
    • addresses the needs of all data professionals and business users who aim to harness the value of data
  • {feature} scales automatically
    • the system automatically allocates an appropriate number of compute resources based on the job size
    • the cost is proportional to total resource consumption, rather than to the size of the cluster or the number of resources allocated
    • jobs in general complete faster (and usually at less overall cost)
      • ⇒ no need to specify cluster sizes
  • natively supports 
    • Spark
    • data science
    • log-analytics
    • real-time ingestion and messaging
    • alerting
    • data pipelines, and 
    • Power BI reporting 
    • interoperability with third-party services 
      • from other vendors that support the same open standards
  • data virtualization mechanisms 
    • {feature} mirroring [notes]
    • {feature} shortcuts [notes]
      • allow users to reference data without copying it
      • {benefit} make other domain data available locally without the need for copying data
  • {feature} tenant (aka Microsoft Fabric tenant, MF tenant)
    • a single instance of Fabric for an organization that is aligned with a Microsoft Entra ID
    • can contain any number of workspaces
  • {feature} workspaces
    • {definition} a collection of items that brings together different functionality in a single environment designed for collaboration
    • associated with a domain [3]
  • {feature} domains [notes]
    • {definition} a way of logically grouping together data in an organization that is relevant to a particular area or field [1]
    • subdomains
      • a way of fine-tuning the logical grouping of data under a domain [1]
        • subdivisions of a domain

Acronyms:
API - Application Programming Interface
M365 - Microsoft 365
MF - Microsoft Fabric
PII - Personally Identifiable Information
SaaS - software-as-a-service

Resources:
[1] Microsoft Learn (2023) Administer Microsoft Fabric [link]
[2] Microsoft Learn: Fabric (2024) Governance overview and guidance [link]
[3] Microsoft Learn: Fabric (2023) Fabric domains [link]
[4] Establishing Data Mesh architectural pattern with Domains and OneLake on Microsoft Fabric, by Maheswaran Arunachalam [link]
[5] Microsoft Learn: Fabric (2024) Introduction to end-to-end analytics using Microsoft Fabric [link]
[6] Microsoft Fabric (2024) Fabric Analyst in a Day [course notes]

🏭🗒️Microsoft Fabric: Delta Lake [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources and may deviate from them. Please consult the sources for the exact content!
Last updated: 1-Apr-2024

Delta Lake 

  • {definition} an optimized open source storage layer that runs on top of a data lake [1]
    • the default storage format in a Fabric lakehouse [1]
    • stores data in Parquet file format
    • is a variant of log-structured storage (a transaction log over Parquet data files)
    • initially developed at Databricks
    • fully compatible with Apache Spark APIs [1]
  • {characteristic} high reliability
  • {characteristic} secure
  • {characteristic} performant
    • provides low latency
  • {feature} data indexing
    • indexes are created and maintained on the ingested data  [1]
      • increases the querying speed significantly [1]
  • {feature} data skipping
    • file statistics are maintained so that only the data subsets relevant to the query are read instead of entire tables; this partition pruning avoids processing data that is not relevant to the query [1]
    • helps complex queries to read only the relevant subsets to fulfil query [1]
  • {feature} multidimensional clustering
    • uses the Z-ordering algorithm
    • enables data skipping
  • {feature} compaction
    • compacts or combines multiple small files into more efficient larger ones [1]
      • speeds up query performance
    • storing and accessing small files can be processing-intensive, slow and inefficient from a storage utilization perspective [1]
  • {feature} data caching
    • highly accessed data is automatically cached to speed access for queries
  • {feature} ACID transactions
    • an "all or nothing" ACID transaction approach is employed to prevent data corruption
      • ⇐ partial or failed writes risk corrupting the data [1]
  • {feature} snapshot isolation (aka SI)
    • ensures that multiple writers can write to a dataset simultaneously without interfering with jobs reading the dataset [1]
  • {feature} schema enforcement
    • data can be stored using a schema
    • {benefit} helps ensure data integrity for ingested data by providing schema enforcement [1]
      • potential data corruption with incorrect or invalid schemas is avoided [1]
  • {feature} checkpointing 
    • employed to provide robust exactly-once delivery semantics [1]
    • {benefit} ensures that data is neither missed nor repeated erroneously [1]
  • {feature} UPSERTS and DELETES support
    • provide a more convenient way of dealing with such changes  [1]
  • {feature} unified streaming and batch data processing
    • both batch and streaming data are handled via a direct integration with Structured Streaming for low latency updates  [1]
      • {benefit} simplifies the system architecture  [1]
      • {benefit} results in shorter time from data ingest to query result  [1]
    • can concurrently write batch and streaming data to the same data table [1]
  • {feature} schema evolution
    •  schema is inferred from input data
    • {benefit} reduces the effort for dealing with schema impact of changing business needs at multiple levels of the pipeline/data stack [1]
  • {feature} scalable metadata handling
  • {feature} predictive optimization
    • removes the need to manually manage maintenance operations for delta tables [8]
    • {enabled} automatically identifies tables that would benefit from maintenance operations, and then optimizes their storage
  • {feature} historical retention
    • {default} maintains a history of all changes made [4]
    • {benefit} enhanced regulatory compliance and audit
    • {recommendation} keep historical data only for a certain period of time to reduce storage costs [4]
  • {feature} time travel
    • {benefit} support for data rollback 
    • {benefit} lets users query point-in-time snapshots [5]
  • {best practice} all writes and reads should go through Delta Lake [1]
    • {benefit} ensure consistent overall behavior [1]
  • {best practice} run OPTIMIZE regularly
    • {exception} should not be run on base or staging tables [1]
  • {best practice} run VACUUM regularly
    • cleans up expired snapshots that are no longer required [1]
  • {best practice} use MERGE INTO to batch changes 
    • {benefit} makes it possible to efficiently rewrite queries that implement updates to archived data and compliance workflows [5]
  • {best practice} use DELETE commands
    • {benefit} ensures proper progression of the change [1] 
    • {warning} manually deleting files from the underlying storage is likely to break the table [1]
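
The data-skipping feature described above can be illustrated with a minimal, pure-Python sketch. This is not Delta Lake's actual implementation; it only shows the idea that per-file min/max statistics let a planner prune files whose value range cannot match a query predicate.

```python
# Toy illustration of Delta Lake-style data skipping (hypothetical, not
# the real implementation): each data file carries min/max statistics for
# a column, and the planner skips files whose [min, max] range cannot
# satisfy a predicate such as `value BETWEEN lo AND hi`.
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    min_value: int  # smallest column value stored in the file
    max_value: int  # largest column value stored in the file

def prune_files(files: list, lo: int, hi: int) -> list:
    """Return only the paths of files whose value range overlaps [lo, hi]."""
    return [f.path for f in files if f.max_value >= lo and f.min_value <= hi]

files = [
    FileStats("part-000.parquet", 1, 100),
    FileStats("part-001.parquet", 101, 200),
    FileStats("part-002.parquet", 201, 300),
]

# A query filtering on values 150..180 only needs to read one file.
print(prune_files(files, 150, 180))  # ['part-001.parquet']
```

In the real engine the statistics live in the Delta transaction log, and the same pruning happens before any Parquet file is opened.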

Acronyms:
ACID - atomicity, consistency, isolation, durability

References:
[1] Azure Databricks (2023) Delta Lake on Azure Databricks
[2] Josep Aguilar-Saborit et al, POLARIS: The Distributed SQL Engine in Azure Synapse, PVLDB, 13(12), 2020 (link)
[3] Josep Aguilar-Saborit et al, Extending Polaris to Support Transactions 2024
[4] Implement medallion lakehouse architecture in Microsoft Fabric (link)
[5] Michael Armbrust et al (2020) Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores, Proceedings of the VLDB Endowment, 13(12) (link)
[6] Bennie Haelen & Dan Davis (2024) Delta Lake: Up and Running: Modern Data Lakehouse Architectures with Delta Lake

08 December 2024

🏭🗒️Microsoft Fabric: Shortcuts [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 8-Dec-2024

[Microsoft Fabric] Shortcut

  • {def} an object that points to another internal or external storage location [1] and that can be used for data access
    • {goal} unifies existing data without copying or moving it [2]
      • ⇒ data can be used multiple times without being duplicated [2]
      • {benefit} helps to eliminate edge copies of data [1]
      • {benefit} reduces process latency associated with data copies and staging [1]
    • is a mechanism that allows unifying data across domains, clouds, and accounts through a namespace [1]
      • ⇒ allows creating a single virtual data lake for the entire enterprise [1]
      • ⇐ available in all Fabric experiences [1]
      • ⇐ behave like symbolic links [1]
    • independent object from the target [1]
    • appear as folder [1]
    • can be used by workloads or services that have access to OneLake [1]
    • transparent to any service accessing data through the OneLake API [1]
    • can point to 
      • OneLake locations
      • ADLS Gen2 storage accounts
      • Amazon S3 storage accounts
      • Dataverse
      • on-premises or network-restricted locations via the on-premises data gateway (OPDG)
  • {capability} create shortcut to consolidate data across artifacts or workspaces, without changing data's ownership [2]
  • {capability} data can be composed throughout OneLake without any data movement [2]
  • {capability} allow instant linking of data already existing in Azure and in other clouds, without any data duplication and movement [2]
    • ⇐ makes OneLake the first multi-cloud data lake [2]
  • {capability} provides support for industry standard APIs
    • ⇒ OneLake data can be directly accessed via shortcuts by any application or service [2]
  • {operation} creating a shortcut
    • can be created in 
      • lakehouses
      • KQL databases
        • ⇐ shortcuts are recognized as external tables [1]
    • can be created via 
      • Fabric UI 
      • REST API
    • can be created across items [1]
      • the item types don't need to match [1]
        • e.g. create a shortcut in a lakehouse that points to data in a data warehouse [1]
    • [lakehouse] tables folder
      • represents the managed portion of the lakehouse 
        • shortcuts can be created only at the top level [1]
          • ⇒ shortcuts aren't supported in other subdirectories [1]
        • if the shortcut's target contains data in the Delta/Parquet format, the lakehouse automatically synchronizes the metadata and recognizes the folder as a table [1]
    • [lakehouse] files folder
      • represents the unmanaged portion of the lakehouse [1]
      • there are no restrictions on where shortcuts can be created [1]
        • ⇒ can be created at any level of the folder hierarchy [1]
        • ⇐ table discovery doesn't happen in the Files folder [1]
  • {operation} renaming a shortcut
  • {operation} moving a shortcut
  • {operation} deleting a shortcut 
    • doesn't affect the target [1]
      • ⇐ only the shortcut object is deleted [1]
        • ⇐ the shortcut target remains unchanged [1]
    • shortcuts don't perform cascading deletes [1]
    • moving, renaming, or deleting a target path can break the shortcut [1]
  • {operation} delete file/folder
    • a file or folder within a shortcut can be deleted when the permissions in the shortcut target allow it [1]
  • {permissions} users must have permissions in the target location to read the data [1]
    • when a user accesses data through a shortcut to another OneLake location, the identity of the calling user is used to authorize access to the data in the target path of the shortcut [1]
    • when accessing shortcuts through Power BI semantic models or T-SQL, the calling user’s identity is not passed through to the shortcut target [1]
      •  the calling item owner’s identity is passed instead, delegating access to the calling user [1]
    • OneLake manages all permissions and credentials
  • {feature} shortcut caching 
    • {def} mechanism used to reduce egress costs associated with cross-cloud data access [1]
      • when files are read through an external shortcut, the files are stored in a cache for the Fabric workspace [1]
        • subsequent read requests are served from cache rather than the remote storage provider [1]
        • cached files have a retention period of 24 hours
        • each time the file is accessed the retention period is reset [1]
        • if the file in remote storage provider is more recent than the file in the cache, the request is served from remote storage provider and the updated file will be stored in cache [1]
        • if a file hasn’t been accessed for more than 24 hours, it is purged from the cache [1]
    • {restriction} individual files greater than 1GB in size are not cached [1]
    • {restriction} only GCS, S3 and S3 compatible shortcuts are supported [1]
  • {limitation} maximum number of shortcuts [1] 
    • per Fabric item: 100,000
    • in a single OneLake path: 10
    • direct shortcuts to shortcut links: 5
  • {limitation} ADLS and S3 shortcut target paths can't contain any reserved characters from RFC 3986 section 2.2 [1]
  • {limitation} shortcut names, parent paths, and target paths can't contain "%" or "+" characters [1]
  • {limitation} shortcuts don't support non-Latin characters [1]
  • {limitation} Copy Blob API not supported for ADLS or S3 shortcuts [1]
  • {limitation} copy function doesn't work on shortcuts that directly point to ADLS containers
    • {recommended} create ADLS shortcuts to a directory that is at least one level below a container [1]
  • {limitation} additional shortcuts can't be created inside ADLS or S3 shortcuts [1]
  • {limitation} lineage for shortcuts to Data Warehouses and Semantic Models is not currently available [1]
  • {limitation} it may take up to a minute for the Table API to recognize new shortcuts [1]
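
As a rough mental model of the caching rules above (24-hour retention reset on each access, no caching of files over 1 GB, staleness checks against the remote copy), here is a small pure-Python simulation. The class and method names are invented for illustration; this is not a Fabric API.

```python
# Toy simulation of the shortcut-cache behavior described above. All
# names here are illustrative only, not part of any Fabric API.
import time

ONE_GB = 1024 ** 3
RETENTION_SECONDS = 24 * 60 * 60  # cached files are retained for 24 hours

class ShortcutCache:
    def __init__(self):
        self._entries = {}  # path -> (cached_at, cached_mtime, size)

    def read(self, path, size, remote_mtime, now=None):
        """Return 'cache' or 'remote' depending on where the read is served."""
        now = now if now is not None else time.time()
        entry = self._entries.get(path)
        if entry is not None:
            cached_at, cached_mtime, _ = entry
            expired = now - cached_at > RETENTION_SECONDS
            stale = remote_mtime > cached_mtime  # remote copy is newer
            if not expired and not stale:
                # Each access resets the retention period.
                self._entries[path] = (now, cached_mtime, size)
                return "cache"
        # Served from the remote provider; cache it unless it is too large.
        if size <= ONE_GB:
            self._entries[path] = (now, remote_mtime, size)
        else:
            self._entries.pop(path, None)
        return "remote"

cache = ShortcutCache()
assert cache.read("a.parquet", 10, remote_mtime=0, now=0.0) == "remote"   # first read
assert cache.read("a.parquet", 10, remote_mtime=0, now=3600.0) == "cache" # within 24h
# Remote file updated -> the next read goes back to the remote provider.
assert cache.read("a.parquet", 10, remote_mtime=5, now=7200.0) == "remote"
# Files larger than 1 GB are never cached.
assert cache.read("big.bin", 2 * ONE_GB, remote_mtime=0, now=0.0) == "remote"
assert cache.read("big.bin", 2 * ONE_GB, remote_mtime=0, now=1.0) == "remote"
```

The point of the sketch is the egress-cost trade-off: repeated reads of an unchanged external file hit the workspace cache, while updates and large files always go back to the remote storage provider.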
References:
[1] Microsoft Fabric (2024) OneLake shortcuts [link]
[2] Microsoft Fabric (2024) Fabric Analyst in a Day [course notes]

Acronyms:
ADLS - Azure Data Lake Storage
AWS - Amazon Web Services
GCS - Google Cloud Storage
KQL - Kusto Query Language
OPDG - on-premises data gateway

07 December 2024

🏭 💠Data Warehousing: Microsoft Fabric (Part IV: SQL Databases for OLTP scenarios) [new feature]

Data Warehousing Series

One of the interesting announcements at Ignite was the availability in public preview of SQL databases in Microsoft Fabric, "a versatile and developer-friendly transactional database built on the foundation of Azure SQL database". With this, Fabric can address OLTP scenarios besides OLAP, thus evolving from an analytics platform into a data platform [1]. According to the announcement, besides the AI-optimized architectural aspects, the feature makes Azure SQL simple, autonomous and secure by design [1], and these latter aspects are considered in this post. 

Simplicity revolves around the deployment and configuration of databases; creating a new database requires only a name, and the database is created in seconds [1]. It’s a considerable improvement compared with the relatively complex setup needed for on-premises configurations, though sometimes more flexibility in configuration is needed upfront or over the database’s lifetime. To get a database ready for testing, one can import a sample database or load specific data via dataflows and/or pipelines [1]. As development tools one can use Visual Studio Code or SSMS [1], and probably more tools will be available in time.

The integration with both GitHub and Azure DevOps allows configuring each database under source control, which is needed for many scenarios, especially when multiple people make changes to the database objects [1]. Frankly, that’s mainly important during the development phase, respectively in scenarios in which multiple people make changes to the logic in parallel. It will be interesting to see how much overhead or how many challenges the feature adds to development and how smoothly everything works together!

The most important aspect for many solutions is the replication of data in near-real time to the (open-source) delta parquet format in OneLake, thus making the data available for analytics almost immediately [1]. Probably many cloud-based applications can benefit from this, even if the performance might not be as good as in other well-established architectures. However, there are many other scenarios in which one needs to maintain and use data for OLTP/OLAP purposes. This invites adequate testing and a careful weighing of the advantages and disadvantages involved. 

A SQL database is a native item in Fabric, and therefore it utilizes Fabric capacity units like other Fabric workloads [1]. One can use the Fabric SKU estimator (still in private preview) to estimate the costs [2], though it will be interesting to see how cost-effective the solutions are. Probably, especially when the infrastructure is already available outside of Fabric, it will be easier and more cost-effective to use the mirroring functionality. One should test and have a better estimate before moving blindly from the existing infrastructure to Fabric. 

SQL databases in Fabric are autonomous by design, while providing the best performance and availability by default [1]. High availability is reached through zone redundancy, while performance is achieved by automatically scaling storage and compute to accommodate the workloads [1]. The auto-optimization capability is achieved with the help of the latest Intelligent Query Processing (IQP) enhancements, respectively the creation of missing indexes to improve query performance [1]. It will be interesting to see how the whole process works, given that the maintenance of indexes usually involves some challenges (e.g. identifying covering indexes, indexes needed only for temporary workloads, duplicated indexes).

In addition, all data is replicated to OneLake by default [1]. Finally, the database always receives the latest security updates with auto-patching, while automatic backups help in disaster recovery scenarios [1], which can be of real help for database administrators. 

References:

[1] Microsoft Fabric Updates Blog (2024) Announcing SQL database in Microsoft Fabric Public Preview [link]

[2] Microsoft Fabric Updates Blog (2024) Announcing New Recruitment for the Private Preview of Microsoft Fabric SKU Estimator [link]


10 November 2024

🏭🗒️Microsoft Fabric: Data Mesh [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 23-May-2024

[Microsoft Fabric] Data Mesh
  • {definition} a type of decentralized data architecture that organizes data based on different business domains [2]
    • a centrally managed network of decentralized data products
  • {concept} landing zone
    • typically a subscription that needs to be governed by a common policy [7]
      • {downside} creating one landing zone for every project can lead to too many landing zones to manage
        • {alternative} landing zones based on a business domain [7] 
    • resources must be managed efficiently in a way that each team is given access to only their resources [7]
      • shared resources might be needed, with separate management and common access for all [7]
    • need to be linked together into a mesh
      • via peer-to-peer networks
  • {concept} connectivity hub
  • {feature} resource group
    • {definition} a container that holds related resources for an Azure solution 
    • can be associated with a data product
      • when the data product becomes obsolete, the resource group can be deleted [7]
  • {feature} subscription
    • {definition} a logical unit of Azure services that are linked to an Azure account
    • can be associated as a landing zone governed by a policy [7]
  • {feature} tenant (aka Microsoft Fabric tenant, MF tenant)
    • a single instance of Fabric for an organization that is aligned with a Microsoft Entra ID
    • can contain any number of workspaces
  • {feature} workspaces
    • {definition} a collection of items that brings together different functionality in a single environment designed for collaboration
    • associated with a domain [3]
  • {feature} domains
    • {definition} a way of logically grouping together data in an organization that is relevant to a particular area or field [1]
    • some tenant-level settings for managing and governing data can be delegated to the domain level [2]
  • {feature} subdomains
    • a way of fine-tuning the logical grouping of data under a domain [1]
    • subdivisions of a domain
  • {concept} deployment template

References
[1] Microsoft Learn: Fabric (2023) Fabric domains (link)
[2] Establishing Data Mesh architectural pattern with Domains and OneLake on Microsoft Fabric, by Maheswaran Arunachalam (link)
[3] Data mesh: A perspective on using Azure Synapse Analytics to build data products, by Amanjeet Singh (link)
[4] Zhamak Dehghani (2021) Data Mesh: Delivering Data-Driven Value at Scale
[5] Marthe Mengen (2024) How do you set up your Data Mesh in Microsoft Fabric? (link)
[6] Administering Microsoft Fabric - Considering Data Products vs Domains vs Workspaces, by Paul Andrew (link)
[7] Aniruddha Deswandikar (2024) Engineering Data Mesh in Azure Cloud

🏭🗒️Microsoft Fabric: Data Warehouse [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 11-Mar-2024

Warehouse vs SQL analytics endpoint in Microsoft Fabric [3]

[Microsoft Fabric] Data Warehouse

  • highly available relational data warehouse that can be used to store and query data in the Lakehouse
    • supports the full transactional T-SQL capabilities 
    • modernized version of the traditional data warehouse
  • unifies capabilities from Synapse Dedicated and Serverless SQL Pools
  • modernized with key improvements
  • resources are managed elastically to provide the best possible performance
    • ⇒ no need to think about indexing or distribution
    • a new parser improves CSV file ingestion times
    • metadata is now cached in addition to data
    • assignment of compute resources improved to milliseconds
    • multi-TB result sets are streamed to the client
  • leverages a distributed query processing engine
    • provides workloads with a natural isolation boundary [3]
      • true isolation is achieved by separating workloads with different characteristics, ensuring that ETL jobs never interfere with ad hoc analytics and reporting workloads [3]
  • {operation} data ingestion
    • involves moving data from source systems into the data warehouse [2]
      • the data becomes available for analysis [1]
    • via Pipelines, Dataflows, cross-database querying, COPY INTO command
    • no need to copy data from the lakehouse to the data warehouse [1]
      • one can query data in the lakehouse directly from the data warehouse using cross-database querying [1]
  • {operation} data storage
    • involves storing the data in a format that is optimized for analytics [2]
  • {operation} data processing
    • involves transforming the data into a format that is ready for consumption by analytical tools [1]
  • {operation} data analysis and delivery
    • involves analyzing the data to gain insights and delivering those insights to the business [1]
  • {operation} designing a warehouse (aka warehouse design)
    • standard warehouse design can be used
  • {operation} sharing a warehouse (aka warehouse sharing)
    • a way to provide users read access to the warehouse for downstream consumption
      • via SQL, Spark, or Power BI
    • the level of permissions can be customized to provide the appropriate level of access
  • {feature} mirroring 
    • provides a modern way of accessing and ingesting data continuously and seamlessly from any database or data warehouse into the Data Warehousing experience in Fabric
      • any database can be accessed and managed centrally from within Fabric without having to switch database clients
      • data is replicated in a reliable way in real-time and lands as Delta tables for consumption in any Fabric workload
  • {concept} SQL analytics endpoint 
    • a warehouse that is automatically generated from a Lakehouse in Microsoft Fabric [3]
  • {concept} virtual warehouse
    • can contain data from virtually any source by using shortcuts [3]
  • {concept} cross-database querying 
    • enables quickly and seamlessly leveraging multiple data sources for fast insights with zero data duplication [3]
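
The idea behind cross-database querying is that one statement can reference tables from several databases by a qualified name, with no data copied. Fabric does this in T-SQL with three-part names (database.schema.table); since that engine can't be run here, the following self-contained sketch uses SQLite's ATTACH mechanism as an analogy for the same idea (it is not the Fabric engine, and the table names are made up).

```python
# Cross-database querying: join data from two databases in one statement,
# without copying the data. SQLite's ATTACH is used here only as an
# analogy for Fabric's T-SQL three-part naming (database.schema.table).
import sqlite3

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (product_id INTEGER, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

# Attach a second database (standing in for a lakehouse) under a name,
# then reference its tables with a qualified name in the same query.
warehouse.execute("ATTACH DATABASE ':memory:' AS lakehouse")
warehouse.execute("CREATE TABLE lakehouse.products (id INTEGER, name TEXT)")
warehouse.executemany("INSERT INTO lakehouse.products VALUES (?, ?)",
                      [(1, "widget"), (2, "gadget")])

# One statement spans both databases; nothing is imported or exported.
rows = warehouse.execute("""
    SELECT p.name, s.amount
    FROM sales AS s
    JOIN lakehouse.products AS p ON p.id = s.product_id
    ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 25.5), ('widget', 10.0)]
```
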
References:
[1] Microsoft Learn: Fabric (2023) Get started with data warehouses in Microsoft Fabric (link)
[2] Microsoft Learn: Fabric (2023) Microsoft Fabric decision guide: choose a data store (link)
[3] Microsoft Learn: Fabric (2024) What is data warehousing in Microsoft Fabric? (link)
[4] Microsoft Learn: Fabric (2023) Better together: the lakehouse and warehouse (link)

Resources:
[1] Microsoft Learn: Fabric (2023) Data warehousing documentation in Microsoft Fabric (link)


16 October 2024

🧭💹Business Intelligence: Perspectives (Part XVIII: There’s More to Noise)

Business Intelligence Series

Visualizations should be built with an audience's characteristics in mind! Depending on the case, it might be sufficient to show only the values or labels of importance (minima, maxima, inflexion points, exceptions, trends), while at other times it might be needed to show all or most of the values to provide an accurate, extended perspective. It might even be useful to allow users to switch between the different perspectives, to reduce the clutter when navigating the data or to look at the patterns revealed by the clutter. 

In data-based storytelling, one typically shows the points, labels and further elements that support the story, the aspects the readers should focus on, though this approach limits the navigability and the users’ overall experience. The audience should be able to compare magnitudes and make inferences based on what is shown, and accurate decoding shouldn’t be taken as a given, especially when the audience can associate different meanings to what’s available and what’s missing. 

In decision-making, selecting only some well-chosen values or perspectives to show might increase the chances for a decision to be made, though is this equitable? Cherry-picking may be justified by the purpose, though it is in general not a recommended practice! What is not shown can be as important as what is shown, and people should be aware of the implications!

One person’s noise can be another person’s signal. Patterns in the noise can provide more insight than the trends revealed in the "unnoisy" data shown! Probably such scenarios are rare, though it’s worth investigating what hides behind the noise. The choice of scale, the use of special types of visualizations or the building of models can reveal more. If it’s not possible to identify such scenarios automatically using standard software, the users should have the possibility of changing the scale and perspective as they see fit. 

Identifying patterns in what seems random can prove to be a challenge no matter the context and the experience in the field. Occasionally, one might need to go beyond the general methods available and statistical packages can help when used intelligently. However, a presenter’s challenge is to find a plausible narrative around the findings and communicate it further adequately. Additional capabilities must be available to confirm the hypotheses framed and other aspects related to this approach.

It's ideal to build data models and a set of visualizations around them. Most probably, some noise will be removed in the process, while other noise will be investigated further. However, this should be done through adjustable visual filters, because what is removed can be important as well. Rare events do occur, probably more often than we are aware of, and they may remain hidden until we find the right perspective that takes them into consideration. 

Probably, some of the noise can be explained by special events that don’t need to be that rare. The challenge is to identify those parameters, associations, models and perspectives that reveal such insights. One’s gut feeling and experience can help in this direction, though novel scenarios can surprise us as well.

Not in every set of data can one find patterns, respectively a story trying to come out. Whether we can identify something worth revealing depends also on the data at our disposal, respectively on whether the chosen data allow identifying significant patterns. Occasionally, the focus might be too narrow, too wide or too shallow. It’s important to look behind the obvious, to look at data from different perspectives, even if the data seem dull. It’s ideal to have the tools and knowledge needed to explore such cases, and here the exposure to other similar real-life scenarios is probably critical!

𖣯Strategic Management: Strategic Perspectives (Part II: The Elephant in the Room)

Strategic Management Perspectives

There’s an ancient parable about several blind men who touch a shape they had never encountered before, an elephant, and try to identify what it is. The elephant is big, more than each person can sense through direct experience, and the individual experiences diverge to the degree that the men don’t trust each other, the situation escalating accordingly. The moral of the parable is that we tend to claim (absolute) truths based on limited, subjective experience [1], and this can easily happen in business scenarios in which each of us has a limited view of the challenges we are facing individually and as a collective.

The situation from the parable can be met in business scenarios when we try to make sense of the challenges we are faced with and get only a limited perspective of the whole picture. Only open dialogue and working together can get us closer to the solution! Even then, the accurate depiction might not be in sight, and we need to extrapolate further into the unknown.

A third-party consultant with experience might be the right answer, at least in theory, though experience and solutions are relative. The consultant might lead us in a direction, though from there to finding the answer can be a long way that requires experimentation, a mix of tactics and strategies that change over time, more sense-making and more challenges lying ahead.

We would like a clear answer and a set of steps that lead us to the solution, though the answer is, as usual, it depends! It depends on the various forces/drivers that have the biggest impact on the organization, on the context, on the organization’s goals, on the resources available directly or indirectly, on people’s capabilities, on the occurrence of external factors, etc.

In many situations the smartest thing to do is to gather information, respectively perspectives, from all the parties. Tools like brainstorming, SWOT/PESTLE analysis or scenario planning can help in sense-making, in identifying the overall picture and where the center of gravity lies. For some organizations the solution will probably be a new ERP system, the redesign of some processes, or the introduction of additional systems to track quality, the flow of material, etc.

A new ERP system will not necessarily solve all the issues (even if that’s the expectation), and some organizations just try to recreate the old processes in a new context. Process redesign in some areas can in certain cases be a better approach, at least as a primary measure. Otherwise, general initiatives focused on quality, data/information management, customer/vendor management, integrations, and so on, can provide the binder/vehicle an organization needs to overcome the current challenges.

Conversely, if the ERP or other strategic systems are 10-20 years old, then there’s indeed an elephant in the room! Moreover, the elephant might be bigger than we can handle, and other challenges might lurk in its shadow(s). Everything is a matter of perspective, with no apparent unique answer. Thus, an acceptable solution might hide in the broader perspective, in the cumulated knowledge of the people experiencing the issues, respectively in some external guidance. Unfortunately, the guides can be as blind as we are, making limited or no real impact.

Sometimes, all that’s needed is a leap of faith corroborated with a set of tactics or strategies kept continuously in check and redirected as needed based on the knowledge accumulated and the challenges ahead. It helps to be aware of how others approached the same issues. Unfortunately, there’s no answer that works for all! In this lies the challenge: identifying what works and makes sense for us!


Resources:
[1] Wikipedia (2024) Blind men and an elephant [link]


15 October 2024

🗄️Data Management: Data Governance (Part III: Taming the Complexity)

Data Management Series

The Chief Data Officer (CDO), or the "Head of the Data Team", has one of the most challenging jobs because it is more of a "political" than a technical role. The ideal candidate must be able to throw and catch curveballs almost all the time, and must be able to play ball with all the parties having an interest in data (aka stakeholders). It’s a full-time job that requires a combination of management and technical skillsets, and both are important! The focus will occasionally shift more in one direction than the other, with important fluctuations.

Moreover, even if one masters the technical and managerial aspects, the combination of the two gives birth to situations that require further expertise, applied systems thinking being probably the most important. This is also because there are so many points of failure that it's challenging to address all the important causes. Therefore, it’s critical to be a systems thinker, to have an experienced team and to make adequate use of its experience!

In a complex world, even the smallest constraint or opportunity can have an important impact, especially when it appears in the early stages of the processes taking place in organizations. Success relies on the manager’s and team’s skillset, their inspiration, the way the business reacts to the tasks involved and probably many other aspects that make things work. It takes considerable effort until the whole mechanism works, and even more time to make it work efficiently. The best metaphor is probably that of a small combat team in which everybody has their place and skillset in the mechanism, independently of whether one talks about strategy, tactics or operations.

Unfortunately, building such teams takes time, and the more people are involved, the more complex the endeavor becomes. The manager and the team must meet somewhere in the middle regarding the philosophy, the execution of the various endeavors, and the way of working together to achieve the same goals. There are multiple forces pulling in all directions, and it takes time until one can align the goals, respectively the effort.

The most challenging forces are those between the business and the data team, respectively between the business and the data requirements, forces that don’t necessarily converge. In small organizations the two parties have in theory more chances to overcome the challenges, and a team’s experience can weigh a lot in the process, though as soon as the scale changes, the number of challenges to be overcome grows exponentially (the basis and exponent of the growth differing from case to case).

In big organizations, other parties can appear that pull with the same force in one direction or another. Thus, the political aspects become more complex, to the degree that the technologies must follow the political decisions, with all the positive and negative implications deriving from this. As a comparison, think about the challenges of moving from two to three or more bodies orbiting each other, which results in a chaotic dynamical system for most initial conditions.

Of course, a business context doesn’t have to create such complexity, though when things go unchecked, when delays in decision-making and other typical events occur, when there’s no structure, strategy, coordinated effort or other important components, the chances of chaotic behavior grow quite high with the passage of time. This is just a model to explain real-life situations that seem similar on the surface but prove to be quite complex when diving deeper. That’s probably why a CDO’s role as tamer of complexity is important and challenging!


11 October 2024

🧭Business Intelligence: Perspectives (Part XVII: Creating Value for Organizations)

Business Intelligence Series

How does one create value for an organization in the BI area? This is a question BI professionals should ask themselves, and eventually their colleagues, on a periodic basis, because the mere act of providing reports and good-looking visualizations doesn’t provide value per se. Therefore, it’s important to identify the critical success factors and value drivers within each area!

One can start with the data, BI or IT strategies, when organizations have invested time in them, respectively with the KPIs and/or OKRs defined, and hopefully the organization already has something similar in place! However, these are just topics that can be used to get a bird’s-eye view of the overall landscape and challenges. It’s advisable to dig deeper, especially when the strategic, tactical and operational plans aren’t in sync, and, let’s be realistic, this happens in many organizations, probably more often than one wants to admit!

Ideally, the BI professional should be able to talk with the colleagues who could benefit from having a set of reports or dashboards that offer a deeper perspective into their challenges. Talking with each of them can be time-consuming and not necessarily value-driven. However, giving each team or department the chance to speak their mind, and to brainstorm what can be done, could in theory bring more value. Even if their issues and challenges should be reflected in the strategy, there’s always an important gap between the actual business needs and those reflected in formal documents, especially when the latter are not revised periodically. Ideally, such issues should be traced back to a business goal, though it’s questionable how much such an alignment is possible in practice. Exceptions will always exist, no matter how well structured and thought out a strategy is!

Unfortunately, this approach also involves some risks. Despite their local importance, the topics raised might not be aligned with what the organization wants, and there can be a strong case against them, even a set of negative aspects related to them. However, talking about the costs of a lost opportunity can hopefully change the balance favorably. In general, translating issues into their associated costs for the organization has (hopefully) the power to change people’s minds.

Organizations tend to bring the major issues forward and address the minor ones only afterwards, with the effect that occasionally some of the small issues grow in impact when not addressed. It makes sense to prioritize with the risks, costs and quick wins in mind while keeping the broader perspective! Quick wins are usually addressed at the strategic level but apparently seldom at the tactical and operational levels, where one can create the most important impact, paving the way for other strategic measures and activities.

The question from the title is not limited to BI professionals; it should be on each manager’s and every employee’s mind. The user is closest to the problems and opportunities, while the manager has a broader view and the authority to push a topic up the waiting list. Unfortunately, the waiting lists in some organizations are quite long, while not having a good set of requests on the list might indicate that issues exist in other areas!

BI professionals and organizations probably know the theory well but prove to have difficulties combining it with praxis. It’s challenging to obtain the needed impact (eventually the maximum effect) with a minimum of effort while addressing the different topics. Sooner or later the complexity of the topic kicks in, messing things up!

17 September 2024

#️⃣Software Engineering: Mea Culpa (Part V: All-Knowing Developers are Back in Demand?)

Software Engineering Series

I’ve been reading many job descriptions lately related to my experience, and curiously or not I observed that many organizations look for developers with Microsoft Dynamics experience in the CRM, respectively Finance and Operations (F&O) and Business Central (BC) areas. It’s a good sign that the adoption of Microsoft solutions for CRM and ERP is increasing, especially when one considers the progress made in the BI and AI areas with the introduction of Microsoft Fabric, which gives Microsoft a considerable boost. Conversely, it seems that the "developers are good for everything" notion is back, at least judging by what one reads in job descriptions.

Of course, it’s useful to have an in-house developer who can address all the aspects of an implementation, though that’s a lot to ask considering the different non-programming areas that need to be addressed. It’s true that a developer with experience can handle Requirements, Data and Process Management, respectively Data Migrations and Business Intelligence topics, though each of these topics can easily become a full-time job before, during and after project implementations. I’ve been there, and I (hopefully) know what the jobs imply. Even if an experienced programmer can easily handle the different aspects individually, there will also be times when all the topics combined are too much for one person!

It's not a novelty that job descriptions are treated like Christmas lists, but it’s difficult to differentiate between the essential and nonessential skillset. I read many job descriptions lately in which, among a huge list of demands, one of the requirements is to program in the F&O framework, a sign that D365 programmers are in high demand. I worked for many years as a programmer and Software Engineer, respectively in the BI area, where SQL and non-SQL code is needed. Even if I can understand the code in F&O, does it make sense to learn now to program in X++ and the whole framework?

It's never too late to learn new tricks, respectively another programming language and/or framework. It even helps to provide better solutions in other areas, though frankly I would invest my time in other areas, and AI-related topics like AI prompting or Data Science seem to be more interesting in the long term, especially when they are already in demand!

There seems to be a tendency for Data Science professionals to do everything, building their own solutions and ignoring the experience accumulated, respectively the data models built, in the BI and Data Analytics areas, as if the topics and data models were unrelated! It’s also true that AI modeling comes with its own requirements in what concerns data modeling (e.g. translating non-numeric to numeric values), though I believe that common ground can be found!

Similarly, notebook-based programming seems to replicate logic in each solution, which occasionally makes sense, though personally I wouldn’t recommend it as a practice! The other day I was looking at code developed in Python to mimic the joining of tables, when a view with the same logic could be more easily (re)used, maintained and read, and would probably be more efficient, even if different engines are used. It will be interesting to see how the mix of spaghetti solutions evolves over time. There are already developers complaining about the number of objects created by building logic for each layer of the medallion architecture! Even if it makes sense from architectural considerations, it will become a nightmare in time.
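To make the contrast concrete, here is a small sketch (the table names, columns and data are invented for illustration) of join logic written inside a notebook versus the same logic defined once as a database view that any SQL-capable consumer can reuse:

```python
import sqlite3
import pandas as pd

# Invented sample data for illustration
customers = pd.DataFrame({"customer_id": [1, 2], "name": ["Alice", "Bob"]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "customer_id": [1, 1, 2],
                       "amount": [50.0, 20.0, 75.0]})

# Notebook-style approach: the join logic lives in this notebook
# and gets copied into every other notebook that needs it
joined = orders.merge(customers, on="customer_id", how="inner")

# View-based approach: the same logic defined once at the database level,
# reusable, maintainable and readable from any consumer
con = sqlite3.connect(":memory:")
customers.to_sql("customers", con, index=False)
orders.to_sql("orders", con, index=False)
con.execute("""
    CREATE VIEW order_details AS
    SELECT o.order_id, o.amount, c.name
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
""")
reused = pd.read_sql("SELECT * FROM order_details", con)
```

Both paths return the same three rows; the difference is where the logic lives and how many copies of it must be maintained when a column or a join condition changes.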

One can also wonder about the nomenclature used: Data Engineer or Prompt Engineering for the simple manipulation of data between structures in data transformations, respectively for structuring prompts for AI. I believe that engineering involves more than this, no matter the context!


16 September 2024

🧭Business Intelligence: Mea Culpa (Part IV: Generalist or Specialist in an AI Era?)

Business Intelligence Series

Except for the early professional years, when I did mainly programming for web or desktop applications in the context of n-tier architectures, over the past 20 years my professional life has been a mix of BI, Data Analytics, Data Warehousing, Data Migrations and other topics (ERP implementations and support, Project Management, IT Service Management, IT, Data and Applications Management), though the BI topics covered on average probably at least 60% of my time, either as an internal or external consultant.

I can thus consider myself a generalist who had the chance to cover most of the important aspects of a business from an IT perspective, and it has been a great experience, at least until now! It’s a great opportunity to look at problems, solutions, processes and the various challenges and opportunities from different perspectives. Technical people should have this opportunity directly in their jobs through the communication occurring in projects or IT services, though that’s more of a wish! Unfortunately, the dialogue between IT and business occurs almost exclusively through tickets and documents, which might be transparent but isn’t necessarily effective or efficient!

Does working only part-time in an area make a person less experienced or knowledgeable than others? In theory, a full-time employee should get more exposure in depth and/or breadth, but that’s relative! It depends on the challenges one faces, the variation of the tasks, the implemented solutions and their depth, and other technical and nontechnical factors like training, one’s experience with the various tools, the variety of the problems faced, professionalism, etc. A richer exposure can, but does not necessarily, involve more technical and nontechnical knowledge, and this shouldn’t be taken as given! There’s no right or wrong answer, even if people tend to take sides and argue over details.

Independently of a job's effective time, one is forced to use one’s own time to keep current with technologies or extend one’s horizon. In IT, a professional can seldom rely only on what is learned on the job. Fortunately, nowadays one has more and more ways of learning, while the challenge shifts toward what to ignore, respectively toward better management of one’s time while learning. The topics increase in complexity, and with this blogging becomes even more difficult, especially when one competes with AI content!

Talking about IT, it will be interesting to see how much AI can help with or replace some of the professions or professionals. Anyway, some jobs will become obsolete or will shift their focus to prompt engineering and technical reviews. AI still needs explicit descriptions of how to address tasks, at least until it learns to create and use better recipes for problem definition and solving. Bottom line, AI and its use can’t be ignored, and it can and should also be used in learning new things. It’s amazing what one can do nowadays with prompt engineering!

Another aspect where AI can help is tailoring content to one’s needs. A high percentage of the learning process is spent fishing in a sea of information for content that is worth knowing, respectively for a solution to one’s needs. AI must also be able to address some of the context without users being forced to provide the information explicitly!

AI opens many doors but can close many others. How much of one’s experience will remain relevant over the next years? Will AI have more success in addressing some of the gaps in people’s understanding, or will people just trust AI blindly? Anyway, somebody must be smarter than AI, and here people’s collective intelligence can probably prove to be a real match.

14 September 2024

🗄️Data Management: Data Governance (Part II: Heroes Die Young)

Data Management Series

In the call for action there’s a tendency in some organizations to idealize and overstate the main actors' purpose and image when talking about data governance, by calling them heroes. Heroes are those people who fight for a goal they believe in with all their being, and occasionally they pay the supreme tribute. Of course, the image of heroes is idealized and many other aspects are ignored, though such images sell ideas and ideals. Organizations might need heroes and heroic deeds to change the status quo, but the heroism doesn't necessarily pay off for the "heroes"!

Sometimes, organizations need a considerable effort to change the status quo. It can be people's resistance to the new, to the demands, to the ideas propagated, especially when they are not clearly explained and executed. It can be the incommensurable distance between the "AS IS" and "TO BE" perspectives, especially when clear paths aren't in sight. It can be the lack of resources (e.g. time, money, people, tools), knowledge, understanding or skillset that makes the effort difficult.

Unfortunately, such initiatives favor action over adequate strategies, planning and an understanding of the overall context. The call to do something creates waves of actions and reactions, which in the organizational context can lead to storms and even extreme behavior that ranges from resistance to the new to heroic deeds. Finding a few messages that support the call for action can help, though they can't replace the various critical success factors.

Leading organizations on a new path requires a well-defined, realistic strategy, respectively adequate tactical and operational planning that reflects the organization's specific needs, knowledge and capabilities. Just demanding that people do their best is not enough, and heroism especially has chances to appear in this context. Unfortunately, the whole weight falls on the shoulders of the people chosen as actors in the fight. Ideally, it should be possible to spread the weight on a broader basis, which should be considered the foundation for the new.

The "heroes" metaphor is idealized and the negative outcome probably exaggerated, though extreme situations do occur in organizations when decisions, planning, execution and expectations are far from ideal. Ideal situations are met only in books and rarely in practice!

The management demands and the people execute, much like in the army, though by contrast people need to understand the reasoning behind what they are doing. Proper execution requires skillset, understanding, training, support, tools and the right resources for the right job. Just relying on people's professionalism and effort is not enough and is suboptimal, but this is what many organizations seem to do!

Organizations tend to respond to the various barriers or challenges with more resources or pressure instead of analyzing and depicting the situation adequately, and eventually changing the strategy, tactics or operations accordingly. It's also difficult to do this as long as an organization doesn't have the capabilities and practices of self-checking, self-introspection, self-reflection, etc. Even if it sounds a bit exaggerated, an organization must know itself to overcome the various challenges. Regular meetings, KPIs and other metrics give the illusion of control when self-control is needed.

Things don't have to be that complex, even if managing data governance is a complex endeavor. Small or midsized organizations are in theory more capable of handling complexity because they can be more agile, have a robust structure, and the flow of information and knowledge has fewer barriers, respectively a shorter distance to overcome, at least in theory. One can probably appeal to the laws and characteristics of networks to understand more about the deeper implications and how solutions can be implemented in more complex setups.

🗄️Data Management: Data Culture (Part V: Quid nunc? [What now?])

Data Management Series

Despite detailed planning and the concentrated, well-directed effort with which the various aspects of data culture are addressed, things don't necessarily turn into what we want them to be. There's seldom only one cause, but a mix of factors that create a network of cause-and-effect relationships which tend to diminish or amplify the effect of certain events or decisions, and it can take just a butterfly's flutter to stir a set of chained reactions. The butterfly effect is usually an exaggeration, until the proper conditions for chaotic behavior appear!

The butterfly effect is made possible by the exponential divergence of two paths. Conversely, success probably needs multiple trajectories to converge toward a final point, or toward intermediary points or areas from which things move onto the "right" path. Success doesn't necessarily mean reaching a point but reaching a favorable zone from which future behavior can follow a positive trend. For example, a sink or a cone-like structure allows water to accumulate and flow toward an area. A similar structure is needed for success to converge, and the structure results from what is built in the process.

Data culture needs a similar structure for the various points of interest to converge. Things don't happen by themselves unless the force of the overall structure is so strong that it allows things to move toward the intended path(s). Even then the paths can be far from optimal, but they can be favorable. Probably that's what the general effort must do: bring the various aspects into the zone that allows things to unfold. It might still be a long road, though the basis is there!

A consequence of this metaphor is that one must identify the important aspects, respectively the factors that influence an organization's culture, and drive them in the right direction(s), namely the paths that converge toward the defined goal(s). (Depending on the area of focus, one can consider successions of more refined goals.)

The structure that allows things to converge is based on the alignment of the various paths and, implicitly, forces. Misalignment can make a force move in another direction, with all the consequences deriving from this behavior. If the force is weak, it probably won't have an impact on the overall structure, though that's relative and can change in time.

One may ask what this whole construct is needed for, even if it doesn’t reflect reality. Sometimes, even a not entirely correct model can allow us to navigate the unknown. The model's intent is to depict what's needed for an initiative to be successful. Moreover, success doesn’t mean hitting the bull's eye but first landing in the zone, until one's skillset enables performance.

Conversely, it's important to understand that things don't happen by themselves. At least this seems to be the impression some initiatives leave. One needs to build and pull the whole structure in the right direction, and the alignment of the various forces can reduce the overall effort and increase the chances of success. Attempting to build something just because it’s written in documentation, without understanding the whole picture (or something close to it), can easily lead to failure.

This doesn’t mean that all attempts that don’t follow a set of patterns are doomed to failure, but that the road will be more challenging and will probably take longer. Conversely, maybe these deviations from the optimal paths are what an organization needs to grow, to solidify the foundation on which something else can be built. The whole path is an exploration that doesn’t necessarily match what is written in books, respectively the expectations!


11 September 2024

🗄️Data Management: Data Culture (Part IV: Quo vadis? [Where are you going?])

Data Management Series

People working for many years in the fields of BI/Data Analytics, Data and Process Management have probably met many reactions that at first sight seem funny, though they reflect bigger issues existing in organizations: people don’t always understand the data they work with, how data are brought together as part of the processes they support, respectively how data can be used to manage and optimize those processes. Moreover, occasionally people torture the data until it confesses something that doesn’t necessarily reflect reality. It’s even more deplorable when the conclusions are used for decision-making, managing or optimizing the process. In extremis, the result is an iterative process that creates more and bigger issues than those it was supposed to solve!

Behind each blunder there are probably bigger understanding issues that need to be addressed. Many of the issues revolve around understanding how data are created, how they are brought together, how the processes work and what data they need, use and generate. Moreover, few business and IT people look at the full lifecycle of data and try to optimize it, or they optimize it in the wrong direction. Data Management is supposed to help, and it does this occasionally, though a methodology, its processes and practices are only as good as people’s understanding of data and its use! No matter how good a data methodology is, it’s only as strong as the weakest link in its use, and typically the issues revolving around data and data understanding are that weakest link.

Besides technical people, few businesspeople understand the full extent of managing data and its lifecycle. Unfortunately, even if some of the topics are treated in books, the books are too dry and need hands-on experience and some thought in corroborating practices with theories. Without this, people will do things mechanically, processes being only as good as the people using them, their value becoming suboptimal and hindering the business. That’s why training on Data Management is not enough without some hands-on experience!

The most important impact is however in the BI/Data Analytics areas: how the various artifacts are created and used as support in decision-making, process optimization and other activities rooted in data. Ideally, some KPIs and other metrics should be enough for managing and directing a business; however, just basing decisions on a set of KPIs without understanding the bigger picture, without having a feeling for the data and their quality, means the whole architecture, no matter how splendid, can break down like a sandcastle on a shore meeting the first powerful wave!

Sometimes it feels like organizations do things out of inertia, driven by the forces of the moment, by initiatives and business issues for which temporary and later permanent solutions are needed. The best chance of solving many of the issues would have been long ago, when the issues were still too small to create any powerful waves within the organization. Therefore, a lot of effort is sometimes spent solving the consequences of decisions not made at the right time, and that can be painful and costly!

Building a good business also needs a solid foundation. In the past it was enough to have a good set of profitable products. However, during the past decade(s) the rules of the game changed, driven by fierce competition across geographies, with inefficiencies, especially in the data and process areas, costing organizations in the short and long term. Data Management in general, and Data Quality in particular, even if challenging to quantify, have the power to address many of the issues existing in organizations by design, if given the right chance!


