25 April 2025

💫🗒️ERP Systems: Microsoft Dynamics 365's Business Process Catalog (BPC) [Notes]

Disclaimer: This is a work in progress intended to consolidate information from various sources, and not to provide a complete overview of all the features. Please refer to the documentation for a complete overview!

Last updated: 25-Apr-2025

Business Process Catalog - End-to-End Scenarios

[Dynamics 365] Business Process Catalog (BPC)

  • {def} a list of end-to-end processes that are commonly used to manage or support work within an organization [1]
    • agnostic catalog of business processes contained within the entire D365 solution space [3]
      • {benefit} efficiency and time savings [3]
      • {benefit} best practices [3]
      • {benefit} reduced risk [3]
      • {benefit} technology alignment [3]
      • {benefit} scalability [3]
      • {benefit} cross-industry applicability [3]
    • stored in an Excel workbook
      • used to organize and prioritize the work on the business process documentation [1]
      • {recommendation} check the latest versions (see [R1])
    • assigns unique IDs to 
      • {concept} end-to-end scenario
        • described in business terms
          • not in terms of software technology
        • includes the high-level products and features that map to the process [3]
        • covers two or more business process areas
        • {purpose} map products and features to benefits that can be understood in business contexts [3]
      • {concept} business process areas
        • combination of business language and basic D365 terminology [3]
        • groups business processes for easier searching and navigation [1]
        • separated by major job functions or departments in an organization [1]
        • {purpose} map concepts to benefits that can be understood in business context [3]
        • more than 90 business process areas defined [1]
      • {concept} business processes
        • a series of structured activities and tasks that organizations use to achieve specific goals and objectives [3]
          • efficiency and productivity
          • consistency and quality
          • cost reduction
          • risk management
          • scalability
          • data-driven decision-making
        • a set of tasks in a sequence that is completed to achieve a specific objective [5]
          • define when each step is done in the implementation [5] [3]
          • define how many are needed [5] [3]
        • covers a wide range of structured, often sequenced, activities or tasks to achieve a predetermined organizational goal
        • can refer to the cumulative effects of all steps progressing toward a business goal
        • describes a function or process that D365 supports
          • more than 700 business processes identified
          • {goal} provide a single entry point with links to relevant product-specific content [1]
        • {concept} business process guide
          • provides documentation on the structure and patterns of the process along with guidance on how to use them in a process-oriented implementation [3]
          • based on a catalog of business processes supported by D365 [3]
        • {concept} process steps 
          • represented sequentially, top to bottom
            • can include hyperlinks to the product documentation [5] 
            • {recommendation} avoid back and forth in the steps as much as possible [5]
          • can be
            • forms used in D365 [5]
            • steps completed in LCS, PPAC, Azure or other Microsoft products [5]
            • steps that are done outside the system (incl. third-party system) [5]
            • steps that are done manually [5]
          • are not 
            • product documentation [5]
            • a list of each click to perform a task [5]
        • {concept} process states
          • include
            • project phase 
              • e.g. strategize, initialize, develop, prepare, operate
            • configuration 
              • e.g. base, foundation, optional
            • process type
              • e.g. configuration, operational
      • {concept} patterns
        • repeatable configurations that support a specific business process [1]
          • specific way of setting up D365 to achieve an objective [1]
          • address specific challenges in implementations and are based on a specific scenario or best practice [6]
          • the solution is embedded into the application [6]
          • includes high-level process steps [6]
        • include the most common use cases, scenarios, and industries [1]
        • {goal} provide a baseline for implementations
          • more than 2000 patterns, a number expected to grow significantly over time [1]
        • {activity} naming a new pattern
          • starts with a verb
          • describes a process
          • includes product names
          • indicates the industry
          • indicates AppSource products
      • {concept} reference architecture 
        • acts as a core architecture with a common solution that applies to many scenarios [6]
        • typically used for integrations to external solutions [6]
        • must include an architecture diagram [6]
    • {concept} process governance
      • {benefit} improved quality
      • {benefit} enhanced decision making
      • {benefit} agility and adaptability
      • {benefit} strategic alignment
      • {goal} enhance efficiency 
      • {goal} ensure compliance 
      • {goal} facilitate accountability 
      • {concept} policy
      • {concept} procedure
      • {concept} control
    • {concept} scope definition
      • {recommendation} avoid replicating current processes without considering future needs [4]
        • {risk} replicating processes in the new system without re-evaluating and optimizing [4] 
        • {impact} missed opportunities for process improvement [4]
      • {recommendation} align processes with overarching business goals rather than the limitations of the current system [4]
    • {concept} guidance hub
      • a central landing spot for D365 guidance and tools
      • contains cross-application documentation
  • {purpose} provide considerations and best practices for implementation [6]
  • {purpose} provide technical information for implementation [6]
  • {purpose} provide links to product documentation to achieve the tasks in scope [6]

References:
[1] Microsoft Learn (2024) Dynamics 365: Overview of end-to-end scenarios and business processes in Dynamics 365 [link]
[2] Microsoft Dynamics 365 Community (2023) Business Process Guides - Business Process Guides [link]
[3] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 2 Introduction to Business Processes [link]
[4] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 3: Using the Business Process Catalog to Manage Project Scope and Estimation [link]
[5] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 4: Authoring Business Processes [link]
[6] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 5: Authoring Business Processes Patterns and Use Cases [link]
[7] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 6: Conducting Process-Centric Discovery [link]
[8] Microsoft Dynamics 365 Community (2024) Business Process Catalog and Guidance - Part 7: Introduction to Process Governance [link]

Resources:
[R1] GitHub (2024) Business Process Catalog [link]
[R2] Microsoft Learn (2024) Dynamics 365 guidance documentation and other resources [link]
[R3] Dynamics 365 Blog (2025) Process, meet product: The business process catalog for Dynamics 365 [link]

Acronyms:
3T - Tools, Techniques, Tips
ADO - Azure DevOps
BPC - Business Process Catalog
D365 - Dynamics 365
LCS - Lifecycle Services
PPAC - Power Platform admin center
RFI - Request for Information
RFP - Request for Proposal

24 April 2025

🧭Business Intelligence: Perspectives (Part XXX: The Data Science Connection)

Business Intelligence Series

Data Science is a collection of quantitative and qualitative methods, respectively techniques, algorithms, principles, processes and technologies used to analyze and process amounts of raw and aggregated data and to extract the information or knowledge they contain. Its theoretical basis is rooted in mathematics, mainly statistics, computer science and domain expertise, though it can include further aspects related to communication, management, sociology, ecology, cybernetics, and probably many other fields, as there's enough space for experimentation and for translation of knowledge from one field to another.

The aim of Data Science is to extract valuable insights from data to support decision-making and problem-solving, and to drive innovation - and probably it can achieve more in time. Reading between the lines, Data Science sounds like a superhero that can solve all the problems existing out there, which frankly is too beautiful to be true! In theory everything is possible, while in practice there are many hard limitations! Given any amount of data, the knowledge that can be obtained from it can be limited by many factors - the degree to which the data, processes and models built reflect reality (and there can be many levels of approximation), respectively the degree to which such data can be collected consistently.

Moreover, even if the theoretical basis seems sound, the data, information or knowledge that is not available can be the important missing link in making any sensible progress toward the goals set in Data Science projects. In some cases, one might be aware of what's missing, though for a data scientist lacking the required domain knowledge this can be a hard limit! The gap can probably be bridged with sensemaking, exploration and experimentation approaches, especially by applying models from other domains, though there are no guarantees ahead!

AI can help in this direction by utilizing its capacity to explore ideas or models fast. However, it's questionable how much the models built with AI can be further used if one can't build mechanistic mental models of the processes reflected in the data. It's like devising an algorithm for winning small amounts at the lottery: investing more money in the algorithm doesn't automatically imply greater wins. Even if occasionally the performance improves, it's questionable how much this can be leveraged at each utilization. Statistics has its utility when one studies data in aggregation and can predict average behavior. It can't be used to predict the occurrence of individual events with high precision. Think how hard the prediction of earthquakes or extreme weather is by just looking at a pile of data reflecting what's happening only in a certain zone!

In theory, the more data one has from different geographical areas or organizations, the more robust the models can become. However, no two geographies, respectively no two organizations are alike: business models, people, events and other aspects make global models less applicable to local contexts. Frankly, one has more chances of progress if a model is obtained for a local scope first and one then attempts to leverage the respective model for a broader scope. Even then, there can be differences between the behavior or phenomena at the micro, respectively the macro level (see the laws of physics).

This doesn't mean that Data Science or AI-related knowledge is useless. The knowledge accumulated by applying various techniques, models and programming languages in problem-solving can be more valuable than the results obtained! Experimentation is a must for organizations that want to innovate and to extend their knowledge base. It's questionable, though, how much of the respective knowledge can be retained and put to good use. In the end, each organization must determine this by itself!

💎🏭SQL Reloaded: Microsoft Fabric's Lakehouses at Work (Part I: Proof-of-Concept)


Introduction

One way to work with the data files existing in an organization is to import them into a lakehouse and build a data model based on them that can be reused in various solutions (incl. Power BI). As a reminder, a lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location.

The tutorials available on lakehouses are pretty useful for getting an idea of how to start, though probably each seasoned professional has his/her own way of doing things, at least for testing the capabilities before building a proper solution. The target is thus to create the minimum needed for testing the capabilities required by a proof-of-concept solution.

The files used in this post are available on Microsoft's GitHub. Download the files and store them so that they're easily available for the next steps. The following files were considered for the current post: customers.csv, orders.csv and products.csv.

Create the Lakehouse

It's recommended to have a naming convention in place for the various items created in a workspace, e.g. a combination of item type (lakehouse, warehouse), environment (Prod, UAT, Dev, PoC) and eventually department (e.g. FIN, SCM, HR). One should try to balance meaning and usefulness. Ideally, one should use 2 to a maximum of 4 letters for each part encoded in the name. For example, the scripts below were created in the LH_SCM_PoC lakehouse.

More complex naming conventions can include the system (e.g. D365, CRM, EBS) or the company. The target is to easily identify the systems, independently of how complex the rules are. Given that it can become challenging to rename the schemas afterwards, the naming convention should ideally be available from the early stages.
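
For example, such a convention could yield names like the ones below (hypothetical examples, apart from the lakehouse used in this post):

-- LH_SCM_PoC : lakehouse, Supply Chain Management, proof-of-concept
-- WH_FIN_Dev : warehouse, Finance, development
-- LH_D365_HR_Prod : lakehouse, D365 as source system, Human Resources, production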

Create the Schema

A lakehouse comes with a dbo schema available by default, though it's recommended to create meaningful schema(s) as needed. The schemas should ideally reflect the domain of the data (e.g. departments or other key areas) and the schemas shouldn't change when the objects are deployed between the different environments. Upon case, one should consider creating multiple schemas that reflect the model's key areas. The names should be simple and suggestive.

-- create the schema
CREATE SCHEMA Orders;

Create a Staging Area

The next step is to create a staging area where the files in scope can be made available and then further loaded into the lakehouse. One needs to compromise between creating a deep hierarchical structure that reflects the business structure and the need to easily identify, respectively manage the files. A hierarchical structure with 1-2 levels can provide the needed compromise, though each additional level tends to increase the complexity.

One should also consider rules for archiving or even deleting the files.
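
For example, a minimal structure could look like the one below (the Archive folder is a hypothetical addition):

Files
>> SCM_Orders -- inbound files for the Orders domain
>> SCM_Orders_Archive -- files already processed, kept for traceability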

Upload the Files

Microsoft Fabric allows users to upload multiple files together in a single step. Ideally the files should have proper names for each column, otherwise the overhead deriving from this may appear later in the process.

When the files are available in multiple folders in a complex hierarchical structure, a set of shortcuts could help in their management.

Load the Data

A file's data can be loaded into the lakehouse on the fly by providing a valid table name:
Files >> SCM_Orders >> (select file) >> Load to Tables >> new table >> Load file to new table >> (provide information) >> Load

Load file to new table


Of course, the table's name must be unique within the schema and the other properties must match the file's definition.

One should consider loading first a couple of tables, performing a rough validation of the imported data, and only after that importing the remaining tables. This allows one to identify the issues that typically lead to reimports of the data (wrong formatting, invalid column names, duplicated files, etc.) or to rework.

If the files have different characteristics (e.g. delimiters, number of attributes/records, special data types), one should consider this in the initial scope and have at least one example from each category. 

Review the Metadata

Once the files are available, there's the tendency to start directly with the development without analyzing the data or, equally important, the metadata available. To review the metadata of the newly created tables, one can use the objects from the standard INFORMATION_SCHEMA (see post):

-- retrieve the list of tables
SELECT * 
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'orders'
ORDER BY TABLE_NAME

Further on, one can review columns' definition:
 
-- retrieve column metadata
SELECT TABLE_CATALOG
, TABLE_SCHEMA
, TABLE_NAME
, COLUMN_NAME
, ORDINAL_POSITION
, DATA_TYPE
, CHARACTER_MAXIMUM_LENGTH
, NUMERIC_PRECISION
, NUMERIC_SCALE
, DATETIME_PRECISION
, CHARACTER_SET_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'orders'
ORDER BY TABLE_NAME
, ORDINAL_POSITION

It's a good idea to save the metadata to a file and use it later for reviews, respectively for metadata management, when no other solution is in place for the same purpose (e.g. Purview). That's also useful for people with limited or no access to the workspace.

Alternatively, one can use a notebook with the following SQL-based commands: 

%%sql

DESCRIBE TABLE LH_SCM_PoC.orders.sales;

DESCRIBE TABLE EXTENDED LH_SCM_PoC.orders.sales;
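
Within a notebook one can also list the tables available in a schema, should an inventory be needed (a small sketch; the three-part name assumes a schema-enabled lakehouse):

%%sql

-- list the tables available in the schema
SHOW TABLES IN LH_SCM_PoC.orders;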

One can even provide meaningful descriptions for each table and its attributes via scripts like the ones below. However, it might be a good idea to do this in the later phases of the PoC, when the logic becomes stable:

%%sql

-- modify a table's COMMENT
COMMENT ON TABLE LH_SCM_PoC.orders.sales IS 'Customer sales orders';

-- modify columns' COMMENT for an existing table
ALTER TABLE LH_SCM_PoC.orders.sales
ALTER COLUMN SalesOrderNumber COMMENT 'Sales Order Number';

Data Validation

Before diving into building any business logic, besides identifying the primary keys, foreign keys and further attributes used in bringing the data together, it's recommended to get an overview of the data's intrinsic and extrinsic characteristics relevant to the analysis. Some of the rules typically used for studying the quality of data apply to some extent here as well, though one needs to prioritize accordingly, otherwise one replicates the effort that's typically part of the Data Management initiatives.

In addition, it's important to check how much the identified issues impact the business logic, respectively whether the issues can be corrected to match the expectations. Often, no logic can compensate for major data quality issues, and this can also affect the PoC's results as soon as the outcomes are validated against the expectations!
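
For example, a minimal check could verify that the keys used later in the joins are populated (a sketch based on the columns used in this post):

-- review missing keys in the orders table (should be 0)
SELECT SUM(CASE WHEN CustomerID IS NULL THEN 1 ELSE 0 END) MissingCustomerIDs
, SUM(CASE WHEN ProductID IS NULL THEN 1 ELSE 0 END) MissingProductIDs
FROM orders.orders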

Data Understanding 

Further on, it makes sense to get a high-level understanding of the data by looking at the distribution of values, respectively at the records participating in the joins. Of course, many similar queries can be built, though again, one should try to focus on the most important aspects!

The analysis could for example consider the following points:

/* validation of Products */

-- review duplicated product names (should be 0)
SELECT ProductName
, count(*) RecordCount
FROM orders.products
GROUP BY ProductName
HAVING count(*)>1

-- review most (in)expensive products
SELECT top 100 ProductID
, ProductName
, Category
, ListPrice 
FROM orders.products
ORDER BY ListPrice DESC --ASC

-- review category distribution
SELECT Category
, count(*) RecordCount 
FROM orders.products
GROUP BY Category
ORDER BY RecordCount DESC

-- review price ranges (by the number of digits of the price)
SELECT Len(floor(ListPrice)) RangeCount
, count(*) RecordCount 
FROM orders.products
GROUP BY Len(floor(ListPrice)) 
ORDER BY RangeCount DESC

/* validation of Customers */

-- duplicated email address 
SELECT CST.CustomerID
, CST.FirstName
, CST.LastName 
, CST.EmailAddress 
, DUP.RecordCount
FROM (-- duplicates
	SELECT EmailAddress
	, count(*) RecordCount 
	FROM orders.customers 
	GROUP BY EmailAddress 
	HAVING count(*)>1
	) DUP
	JOIN orders.customers CST
	   ON DUP.EmailAddress = CST.EmailAddress
ORDER BY DUP.RecordCount DESC
, DUP.EmailAddress 

-- duplicated Customer names (not necessarily duplicates)
SELECT CST.CustomerID
, CST.FirstName
, CST.LastName 
, CST.EmailAddress 
, DUP.RecordCount
FROM (-- duplicates
	SELECT FirstName
	, LastName
	, count(*) RecordCount 
	FROM orders.customers 
	GROUP BY FirstName
	, LastName 
	HAVING count(*)>1
	) DUP
	JOIN orders.customers CST
	   ON DUP.FirstName = CST.FirstName
      AND DUP.LastName = CST.LastName
ORDER BY DUP.RecordCount DESC
, DUP.FirstName
, DUP.LastName

/* validation of Orders */

-- review a typical order
SELECT SalesOrderID
, OrderDate
, CustomerID
, LineItem
, ProductID
, OrderQty
, LineItemTotal
FROM orders.orders
WHERE SalesOrderID = 71780
ORDER BY SalesOrderID 
, LineItem

-- review orders' distribution by month
SELECT Year(OrderDate) Year
, Month(OrderDate) Month
, count(*) RecordCount
FROM orders.orders
GROUP BY Year(OrderDate) 
, Month(OrderDate) 
ORDER BY Year
, Month

-- checking for duplicates
SELECT SalesOrderID
, LineItem
, count(*) RecordCount
FROM orders.orders ord 
GROUP BY SalesOrderID
, LineItem
HAVING count(*)>1

-- checking for biggest orders
SELECT SalesOrderID
, count(*) RecordCount
FROM orders.orders ord 
GROUP BY SalesOrderID
HAVING count(*) > 10
ORDER BY RecordCount DESC

-- checking for most purchased products
SELECT ProductID
, count(*) NoRecords
FROM orders.orders ord 
GROUP BY ProductID
HAVING count(*) > 8
ORDER BY NoRecords DESC

-- checking for most active customers
SELECT CustomerID
, count(*) RecordCount
FROM orders.orders ord 
GROUP BY CustomerID
HAVING count(*) > 10
ORDER BY RecordCount DESC

/* join checks */

-- Orders without Product (should be 0)
SELECT count(*) RecordCount
FROM orders.orders ord 
	 LEFT JOIN orders.products prd
	   ON ord.ProductID = prd.ProductID
WHERE prd.ProductID IS NULL

-- Orders without Customer (should be 0)
SELECT count(*) RecordCount
FROM orders.orders ORD 
	 LEFT JOIN orders.customers CST
	   ON ORD.CustomerID = CST.CustomerID
WHERE CST.CustomerID IS NULL

-- Products without Orders (153 records)
SELECT count(*) RecordCount
FROM orders.products prd
	 LEFT JOIN orders.orders ord 
	   ON prd.ProductID = ord.ProductID 
WHERE ord.ProductID IS NULL


-- Customers without Orders (815 records)
SELECT count(*) RecordCount
FROM orders.customers CST
	 LEFT JOIN orders.orders ORD
	   ON ORD.CustomerID = CST.CustomerID
WHERE ORD.CustomerID IS NULL
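
Another useful check compares the stored line totals against quantity times list price; differences aren't necessarily errors (e.g. discounts), though they should be explainable (a sketch):

-- review differences between line totals and quantity times list price
SELECT ord.SalesOrderID
, ord.LineItem
, ord.OrderQty
, ord.LineItemTotal
, ord.OrderQty * prd.ListPrice ListPriceTotal
FROM orders.orders ord
	 JOIN orders.products prd
	   ON ord.ProductID = prd.ProductID
WHERE ord.LineItemTotal <> ord.OrderQty * prd.ListPrice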

The more tables are involved, the more complex the validation logic can become. One should focus on the most important aspects.

Building the Logic

Once one has an acceptable understanding of the data entities involved and the relations between them, it's time to build the needed business logic by joining the various tables at the various levels of detail. One can focus on the minimum required, respectively attempt to build a general model that can address a broader set of requirements. For a PoC it's usually recommended to start small by addressing the immediate requirements, though some flexibility might be needed for exploring the data and preparing the logic for a broader set of requirements. Independently of the scope, one should consider a set of validations.

Usually, it makes sense to encapsulate the logic in several views or table-valued functions that reflect the logic for the main purposes and allow a high degree of reuse (see [1]). Of course, one can use the standard approach of modelling the bronze, silver, respectively the gold layers adopted by many professionals. For a PoC, even if that's not mandatory, it might still be a good idea to make steps in the respective direction.

In this case, dealing with only three tables - a fact table and two dimension tables - there are several perspectives that can be built:

a) all records from fact table + dimension records

The following view provides the lowest level of detail for the fact table, allowing one to look at the data from different perspectives as long as the focus is only on the values used in Sales Orders:

-- create the view
CREATE OR ALTER VIEW orders.vSalesOrders
-- Sales Orders with Product & Customer information
AS
SELECT ORD.SalesOrderID
, ORD.OrderDate
, ORD.CustomerID
, CST.FirstName 
, CST.LastName
, CST.EmailAddress
, ORD.LineItem
, ORD.ProductID
, PRD.ProductName 
, PRD.Category
, ORD.OrderQty
, ORD.LineItemTotal
, PRD.ListPrice 
, ORD.OrderQty * PRD.ListPrice ListPriceTotal
FROM orders.orders ORD 
	 JOIN orders.products PRD
	   ON ORD.ProductID = PRD.ProductID
	 JOIN orders.customers CST
	   ON ORD.CustomerID = CST.CustomerID

-- test the view   
SELECT *
FROM orders.vSalesOrders
WHERE SalesOrderID = 71780

One can use inner joins as long as the referenced dimension records are always available; otherwise, left joins should be considered.

b) aggregated data for all dimension combinations

The previous view allows one to aggregate the data at various levels of detail:

-- Sales volume by Customer & Product
SELECT ORD.EmailAddress
, ORD.ProductName 
, ORD.Category
, SUM(ORD.OrderQty) OrderQty
, SUM(ORD.LineItemTotal) LineItemTotal
FROM orders.vSalesOrders ORD 
WHERE ORD.OrderDate >= '2022-06-01'
  AND ORD.OrderDate < '2022-07-01'
GROUP BY ORD.EmailAddress
, ORD.ProductName 
, ORD.Category
ORDER BY ORD.EmailAddress
, ORD.ProductName 

One can comment out the dimensions not needed. The query can be included in a view as well, as sketched below.
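
A minimal sketch of such a view (the name is illustrative; the date filter is left to the consuming queries):

-- create the aggregation view
CREATE OR ALTER VIEW orders.vSalesVolumeByCustomerProduct
-- Sales volume by Customer & Product
AS
SELECT ORD.EmailAddress
, ORD.ProductName 
, ORD.Category
, SUM(ORD.OrderQty) OrderQty
, SUM(ORD.LineItemTotal) LineItemTotal
FROM orders.vSalesOrders ORD 
GROUP BY ORD.EmailAddress
, ORD.ProductName 
, ORD.Category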

c) all records from each dimension table + aggregated fact records

Sometimes, it's useful to look at the data from a dimension's perspective, though such an object might need to be created for each dimension, like in the examples below. For maximum flexibility the logic can be included in a table-valued function:

-- create the user-defined function
CREATE OR ALTER FUNCTION orders.tvfProductsSalesVolume(
    @StartDate date = NULL,
    @EndDate date = NULL
)
RETURNS TABLE
-- Sales volume by Product
AS
RETURN (
SELECT PRD.ProductID
, PRD.ProductName 
, PRD.Category
, ORD.FirstOrderDate
, ORD.LastOrderDate 
, IsNull(ORD.TotalSalesQty, 0) TotalSalesQty 
, IsNull(ORD.TotalSalesValue, 0) TotalSalesValue
, IsNull(ORD.OrderCount, 0) OrderCount
, IsNull(ORD.LineCount, 0) LineCount
FROM orders.products PRD
     OUTER APPLY (
		SELECT Min(ORD.OrderDate) FirstOrderDate
		, Max(ORD.OrderDate) LastOrderDate 
		, SUM(ORD.OrderQty) TotalSalesQty
		, SUM(ORD.LineItemTotal) TotalSalesValue
		, count(DISTINCT SalesOrderID) OrderCount
		, count(*) LineCount
		FROM orders.orders ORD 
		WHERE ORD.ProductID = PRD.ProductID
		  AND ORD.OrderDate >= @StartDate 
		  AND ORD.OrderDate < @EndDate 
	 ) ORD
);

-- test the user-defined function
SELECT *
FROM orders.tvfProductsSalesVolume('2022-06-01','2022-07-01') PRD
WHERE TotalSalesValue <> 0
ORDER BY TotalSalesValue DESC
, LastOrderDate DESC


-- create the user-defined function
CREATE OR ALTER FUNCTION orders.tvfCustomersSalesVolume(
    @StartDate date = NULL,
    @EndDate date = NULL
)
RETURNS TABLE
-- Sales volume by Customer
AS
RETURN (
SELECT CST.CustomerID
, CST.FirstName 
, CST.LastName
, CST.EmailAddress
, ORD.FirstOrderDate
, ORD.LastOrderDate 
, IsNull(ORD.TotalSalesValue, 0) TotalSalesValue
, IsNull(ORD.OrderCount, 0) OrderCount
, IsNull(ORD.LineCount, 0) LineCount
FROM orders.customers CST
     OUTER APPLY (
		SELECT Min(ORD.OrderDate) FirstOrderDate
		, Max(ORD.OrderDate) LastOrderDate 
		, SUM(ORD.LineItemTotal) TotalSalesValue
		, count(DISTINCT SalesOrderID) OrderCount
		, count(*) LineCount
		FROM orders.orders ORD 
		WHERE ORD.CustomerID = CST.CustomerID
		  AND ORD.OrderDate >= @StartDate 
		  AND ORD.OrderDate < @EndDate 
	 ) ORD
);

-- test the user-defined function
SELECT *
FROM orders.tvfCustomersSalesVolume('2022-06-01','2022-07-01') CST
WHERE TotalSalesValue <> 0
ORDER BY TotalSalesValue DESC
, LastOrderDate DESC

When restructuring the queries in similar ways, there's always a compromise between various factors: (re)usability, performance and completeness.

Further Comments

The above database objects should allow users to address most of the requirements, though, as usual, there can also be exceptions, especially when the data needs to be aggregated at a different level of detail that requires the query to be structured differently.

The number of perspectives can also increase with the number of fact tables used to model a certain entity (e.g. Sales order headers vs. lines).

In theory, one can also find ways to automate the process of creating database objects, though one must choose the relevant attributes, respectively include logic that makes sense only within a certain perspective. 

No matter the data, respectively the systems used as source, expect surprises and test your assumptions! For example, in the file used to create the orders.customers table, there seem to be duplicated entities with the same name and email address. One must clarify how such entities should be handled in data analysis, respectively in data modeling. For example, a person can appear twice because of the roles associated with the name, or there can be other legitimate reasons.

The files in scope of this post are small compared with the files existing in organizations. In many scenarios the files' size can range from GBs to TBs, thus requiring partitioning and various other strategies.


References
[1] sql-troubles (2023) Architecture Part IV: Building a Modern Data Warehouse with Azure Synapse [link]

Resources
[1] Microsoft Learn (2024) Fabric: Lakehouse and Delta Lake tables [link]

20 April 2025

🧮ERP: Implementations (Part XVIII: The Price of Partnership)


ERP Implementations Series

When one proceeds on a journey, it's important to have a precise destination, a good map that shows the road and the obstacles ahead and helps plan the journey, good gear, enough resources to make it through the journey, and, probably more important, good companions and ideally guides who can show the way ahead and offer advice when needed. This is in theory the role of a partner, and it is on such coordinates that a partnership should be based. However, unless the partners pay for the journey as well, the partnership can come with important costs and occasionally more overhead than needed.

The traveler's metaphor is well suited to ERP implementations, and probably to many other projects in which the customer doesn't have the knowledge about the various aspects of the project. The role of a partner is thus multifold, and it takes time for all the areas to be addressed. Typically, it takes years for such a relationship to mature to the degree that it all develops naturally, at least in theory. Conversely, few relationships can resist over time, given the complex challenges resulting from different goals and objectives, different business models, and the lack of successes or benefits.

Usually, a partnership means sharing the risks and successes, but more importantly, building a beneficial bidirectional relationship from which all parties can profit. This usually means that the partner provides a range of services not available in-house, allowing customers to pull resources and knowledge on a need-by basis, and providing direction and other advice whenever needed. A partner can help if it has the needed insight into the business, and this implies a minimum of communication in respect to business decisions, strategies, goals, objectives, requirements, implications, etc.

During sales pitches and other meetings, many service providers assume the role of partners; however, between their behavior and the partner role there is usually a considerable gap that may seem difficult if not impossible to bridge. It's helpful to define as part of the various contracts the role of the partnership, respectively its further implications. It's helpful to have a structure of bidirectional bonuses and other benefits that would help to strengthen the bond between organizations. A framework for supporting the partnership must be built, and this takes time to implement adequately.

Even if some consultants are available from the early stages of the partnership, that's typically the exception and not the norm. It's typical for resources to be involved only for the duration of a project or less. Independently of their performance, the replacement of resources in projects is unavoidable and must be addressed adequately, with knowledge transfer and all that belongs to such situations. Moreover, it needs to be managed adequately by the service provider; however, resources can't be replaced like the parts of an engine. The planning and other activities must consider and accommodate such changes.

The replacement of partners in the middle of a project is possible as well, and this option should be considered as an exception in projects and planned accordingly. The costs of working with partners can be high and therefore organizations should consider the alternatives. Bringing individual resources into projects and even building long-term relationships with them can prove to be a cost-effective alternative. Even if such partnerships are more challenging to manage, the model can offer other advantages that compensate for the overhead of managing them.

Outsourcing resources across geographies or mixing models can work as well. Even if implementations usually don't allow for experiments, such models can still be a feasible alternative. Past successes and failures are usually a good measure of what works and what doesn't for organizations.


🧮ERP: Implementations (Part XVII: Taming the Monsters)

ERP Implementations Series
 
Given their extensive scope, duration, investment and complexity, ERP implementations are probably one of the most complex endeavors pursued by organizations. Moreover, they are often a matter of endurance with many junctions, detours, turns, shortcuts, ups and downs, a continuous carousel in which the various issues tend to misbehave like little monsters, many of them haunting one's dreams unexpectedly during and long after implementations.

Probably the main drivers are the scale and mass of such projects, as they touch all or most of the important aspects of organizations. Just consider the typical project done for a single department and multiply its complexity by a number representing the departments in scope. And the more one goes into details, the higher the complexity. To move forward the parties need to compromise, and as no one wants to do that, the discussions get prolonged and personal, issues are escalated, and probably further negative effects follow.

Tensions can be rooted in politics, in the friction between different goals, in the need to prioritize requirements, in postponing or leaving things out of scope, or in pushing an agenda other parties don't agree with. Besides the typical constraints of projects, there's the complexity of performing a huge amount of work within a limited period, time during which the resources must be available, the quality must match the expectations, and so many aspects must be considered!

Of course, not all implementations are like this, though each such project is a real exam of maturity for the people involved in it. Sometimes, it's better to have people who care about the decisions made. On the opposite side, there are organizations that go almost blindly with the solutions suggested to them, with all the effects resulting from this. Probably the middle way between these two extremes is more advisable, though it's hard to find such a path through all the complexity.

An ERP implementation is highly dependent on the initial conditions under which the project started, the commitment made by the various parties involved, the way resources are made available, what's considered in the plan, the communication that takes place, the planning done and its enforcement, etc. Of course, some topics can also be addressed later, though delays tend to create more delays that ripple through the project. Under normal circumstances the backlog and other aspects can be manageable, though it's enough for a few issues to gather momentum for their cumulative effect to grow exponentially.

Certain sensitive project topics can easily lead to crises and abnormal behavior, though such situations are usually exceptions (until they are not). It's important to have in place the processes and procedures that can be used to address this kind of situation and, no less important, to have them communicated to the team. Moreover, it's not necessary to reinvent the wheel - the processes defined in IT and project methodologies can be used and adapted for this purpose.

It's important to have in place all the processes, procedures and checkpoints needed to support the project. The people participating in a project should have some hands-on experience with them, including the exceptions (e.g. escalation procedures). It's useful to have a mentor or some experienced person who can help with advice and even attend meetings and provide constructive feedback. Just having some awareness sessions with no feedback can be as dangerous as not having any training at all! It's suboptimal to use the implementation itself as an environment for learning, though in extremis this approach may work as well.


19 April 2025

🧮ERP: Implementations (Part XVI: It’s All About Politics)

ERP Implementations Series

An ERP implementation takes place within a political context, and politics can make or break implementations. Politics occurs whenever individuals or organizational groups interact to make decisions that affect parts of the organization or the whole. Besides decision-making, there are further components that revolve around the various types of resource allocation and management, resulting in power dynamics that shape and pull organizations in politically charged directions.

Given the deep implications of ERP systems, probably in no other type of project are the political aspects as visible and stringent to all employees, to the degree that they pull decisions in one direction independently of the actual requirements. It may seem incredible, though there are cases in which ERP systems were selected just because the organization's CEO played golf with the vendor's CEO. In the end, the gaps between systems should be minimal nowadays, at least in theory, shouldn't they?

Of course, just because one meets certain strange behaviors, it doesn't mean that this is common practice! There are higher chances of selecting an inadequate system just because the sales representative did a good job and convinced the audience that the system can do anything they want. It probably does, if coins are paid for each missing feature, and in the long term it can be a lot of coins. Conversely, even if a system satisfies today's requirements, it doesn't mean it will also satisfy future ones. Only the future can tell whether the choice of one system over the others was a good one.

The bigger the gaps between the various interests, the more difficult it becomes to pull the project in the right direction. Probably the best way to demonstrate why one system is better than another is by bringing facts and focusing on the main requirements of the organization. This supposes the existence of an explicit list of requirements with a high-level description of how they can be addressed by the future system. This might not be enough, though it's a good start, a good basis for discussion, for making people aware of the implications. However, doing this exercise for 2-3 or more systems is not cost-effective, as such an analysis can become time-consuming and expensive.

One way to address political resistance is by discussing openly with the stakeholders and addressing their concerns, arguing why the system is a good choice, what can be done to address the gaps, and so on. It will not always be enough, though it's important to establish common ground for further discussions. Further on, it's important to keep the same openness and disposition for communication, given that the further the project progresses, the higher the likelihood that other concerns appear. It's a never-ending story if there are gaps between needs and what the system provides.

It's important to establish clear and honest communication with the stakeholders, informing them proactively about the challenges faced, independently of the area in which they occur. Conversely, too much communication can be disruptive and can create other challenges. One way to cope with this is by identifying each stakeholder's communication needs, respectively the volume of information adequate for each of them. That's Project Management 101.

The Project Manager and his team should ideally anticipate and address the potential conflicts in a timely manner, before they propagate and reach a broader audience. It's questionable how much can be achieved proactively, especially when the project keeps everybody busy. The tendency is to answer politics with politics, though brainstorming sessions, open communication and a few other approaches can reach deeper where politics can't.

18 April 2025

🧮ERP: Implementations (Part XV: An Ecosystemic View)

ERP Implementations Series

In organizations, the network of interconnected systems, technologies, applications and users, together with everything that resides around them, physically or logically, forms through their direct or indirect interactions an ecosystem governed by common goals, principles, values, policies, processes and resources. An ERP system thus becomes part of the ecosystem and, given its importance, probably the centerpiece of the respective structure, having the force to shift all the flows existing in the organization toward itself, respectively to nourish its satellite systems, enabling them to create more value.

An ecology's components should allow for growth, regeneration, adaptation, flow, interaction and diversity, respectively create value for the organization. The more the flow of information, data, products and money is unrestricted and sustained by the overall architecture, the more beneficial it becomes. At least, this is the theoretical basis such architectures are built upon. Of course, all this doesn't happen by itself, but must be sustained and nourished adequately!

The mixture of components can enable system interoperability, scalability, reliability, automation, extensibility or maintainability, with impact on collaboration, innovation, sustainability, cost-efficiency and effectiveness. All of these are concepts with weight and further implications for the ecology. Some aspects are available by design, while others need to be introduced and enforced in a systematic, respectively systemic manner. It's an exploration of what's achievable, of what works, of what creates value. Organizations should review the best practices in the field and explore.

Ideally, the ERP system should nourish the ecology it belongs to, enabling it to grow and create more value for the organization. Conversely, the ERP system can become a cannibal for the ecology, pulling all the resources toward itself but, more importantly, limiting the other systems in terms of functionality, value, usage, etc. In other terms, an ERP system has the power to cannibalize its environment, with all the implications deriving from this. This may seem like a pessimistic view, though it's met in organizations more often than people think. Just think about the rate of failure of ERP implementations, with the ERP thus cannibalizing all the financial and human resources at the organization's disposal.

To create more value, the ERP should be integrated with the other critical and non-critical information systems - email and other communication systems, customer and vendor relationship systems, inventory management systems, regulatory reporting platforms, and probably many other systems designed to address various needs. Of course, not every organization needs all these types of information systems, though the functions exist to some degree in many organizations. Usually, components grow in importance with the shift of needs and attention.

There are also small organizations that use only small subparts of the ERP system (e.g. finance, HR); however, an ERP investment becomes more attractive and cost-effective the further the organization can leverage all its (important) features. It's also true that an ERP system is not always the solution for everybody. It's a matter of scale, functionality and business model. Above all, the final solution must be cost-effective and an enabler for the organization!

Beyond the ecosystemic view, there are design principles and rules that can be used to describe, map and plan the road ahead, though nothing should be considered fixed. One needs to balance opportunities and risks, value and costs, respectively the different priorities. In some scenarios there's a place for experimentation, while in others organizations must stick to what they know. Even if there may be recipes for success, there's no guarantee behind them. Each organization must evaluate what works and what doesn't. It's a never-ending story in which such topics need to be approached gradually and iteratively.


