14 October 2023

🧭Business Intelligence: Perspectives (Part VII: Insights - 'Aha' Moments)

Business Intelligence Series

On one side, scientists talk about 'insight' with reverence when referring to the processes, patterns, models, metaphors, stories and paradigms used to generate and communicate it. Conversely, data professionals seem to regard insight as something trivial, achievable just by picking and combining the right visualizations and storytelling techniques. Are the scientists exaggerating when talking about insight, or do the data professionals downplay its meaning and role? Or maybe the scientific and business contexts have incomparable complexity, even if the same knowledge toolset is used?

One probably can't deny the potential of tools or toolsets like data visualization or data storytelling to provide new information or knowledge that leads to insights, though between potential usefulness and harnessing that potential on a general basis there's a huge difference, no matter how much people tend to idealize the process (and there's a lot of idealization going on). Moreover, sometimes the whole process looks like a black box in which some magic happens and insight emerges.

It's challenging to explain the gap as long as there's no generally accepted scientific definition of insight, nor an explanation of how insights come into being. Probably, the easiest way to recognize their occurrence is the 'Aha' moment, though that's the outcome of a process and gives almost no information about the process itself. Thus, insight occurs when knowledge about the business is acquired - knowledge that allows a new or better understanding of the data, facts, processes or models involved. 

So, there must be new associations that are formed, either derived directly from data or brought to surface by the storytelling process. The latter aspect implies that the storyteller is already in possession of the respective insight(s) or facilitates their discovery without being aware of them. It further implies that the storyteller has a broader understanding of the business than the audience, which is seldom the case, or that the storyteller has a broader understanding of the data and the information extracted from the data, and that's a reasonable expectation.

There are two important restrictions. First, the insight moments must be associated with the business context rather than with the mere use of tools! Second, it should be genuine knowledge - not knowledge the average person is expected to have, nor the mere confirmation of expectations or biases. 

Understanding can be put in the context of decision making, respectively in the broader context of problem solving. In the latter, insight involves the transition from not knowing how to solve a problem to the state of knowing how to solve it. This could apply in the context of data visualization as well, though there might be intermediary steps in between. For example, in a first step insights enable us to understand and define the right problem. A further step might involve recognizing that the problem belongs to a broader set of problems that share certain characteristics. Thus, the process might involve a succession of 'Aha' moments. Given the complexity of the problems we deal with in business or social contexts, that's quite likely to happen. So, the average person might need several 'Aha' moments - leaps in understanding - before the data can make a difference! 

Conversely, new knowledge and understanding gained over successive steps might not lead to an 'Aha' moment at all. Whether such steps culminate in an 'Aha' moment may depend on the magnitude of the overall leap, though other factors might be involved as well. In the end, the emergence of new understanding is enough to explain what insights mean. Whether that's enough is a different discussion!

Previous Post <<||>> Next Post 


10 October 2023

💫Base Enums Metadata in Dynamics 365 for Finance and Operations

Lists of values that don't have their own tables are managed within the application as (Base) Enums and, naturally, only the numeric values are saved to the database. Even if this is practical for the application, it's a nightmare for the people using data exported from the database, as one needs to convert the codes to meaningful values.

Fortunately, there are ways to work around this limitation. First, there's the old AX 2012 documentation for Base Enums, which lists the Names and Descriptions of the Enum values (e.g., search for CustVendNegInstStatus). However, as Microsoft stopped maintaining that documentation, the changes made in D365 are not reflected in it.

Secondly, the mapping between Values and Names is stored in two of the tables available in D365, and the following query retrieves it:

-- Metadata - Base Enum Values 
SELECT EIT.Id 
, EIT.Name EnumName
, EVT.EnumValue EnumValueId
, EVT.Name EnumValueName 
, ' WHEN ' + Cast(EVT.EnumValue as nvarchar(10)) + ' THEN ''' + EVT.Name + '''' [CASE Statement]
FROM dbo.EnumValueTable EVT
     JOIN dbo.EnumIdTable EIT
       ON EVT.EnumId = EIT.Id
WHERE EIT.Name LIKE 'CustVendNegInstStatus%'
--WHERE EVT.Name LIKE '%Redrawn%'
ORDER BY EIT.Name 
, EVT.EnumValue 

The [CASE Statement] column already generates the branches needed for a CASE expression, as exemplified below. One can search either for the Base Enum's name or for the value itself. 
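
For example, the generated branches can be pasted into a mapping like the following (a sketch; the enum values shown are illustrative and should be regenerated for each environment):

-- example: translating Base Enum codes via the generated CASE branches
SELECT ITM.ProductType
, CASE ITM.ProductType
    WHEN 1 THEN 'Item'
    WHEN 2 THEN 'Service'
  END ProductTypeName
FROM dbo.EcoResProduct ITM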

Unfortunately, the metadata are not complete and, as there's no timestamp, it's not possible to identify what changed and when. Therefore, one needs to check for changes manually on a regular basis, or store a copy of the respective data as a baseline and compare the latest metadata against it (see the sketch below). The solution is not ideal, but it does the trick.
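
A minimal sketch for the baseline approach (dbo.EnumValuesBaseline is a table name chosen for illustration):

-- creating the baseline (one-time)
SELECT EIT.Name EnumName
, EVT.EnumValue EnumValueId
, EVT.Name EnumValueName
INTO dbo.EnumValuesBaseline
FROM dbo.EnumValueTable EVT
     JOIN dbo.EnumIdTable EIT
       ON EVT.EnumId = EIT.Id

-- identifying the values added or changed since the baseline was taken
SELECT EIT.Name EnumName
, EVT.EnumValue EnumValueId
, EVT.Name EnumValueName
FROM dbo.EnumValueTable EVT
     JOIN dbo.EnumIdTable EIT
       ON EVT.EnumId = EIT.Id
EXCEPT
SELECT BAS.EnumName
, BAS.EnumValueId
, BAS.EnumValueName
FROM dbo.EnumValuesBaseline BAS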

To identify the Base Enum corresponding to a field, one can export the script behind a form, when the functionality is available. Otherwise, one can use the Table Browser and generate the script from the corresponding source table. 

Occasionally, the values stored in the above tables may deviate slightly from expectations. It's useful then to search only for a substring, to use both sources, or even to search the internet for more information.

When building reports or a semantic layer on top of the D365 data model, such mappings need to be retrieved for almost all important tables. It's a good idea to encapsulate the mappings in "base views" and reuse them when needed, thus avoiding replicating the mappings for each use of the source table (see the logic for TDM.vEcoResProduct).

The data generated by the above script could be dumped to a table which is then joined in the queries whenever the values for a Base Enum are needed. However, I don't recommend such an approach because it would increase the complexity of queries that are already complex in certain scenarios. Also, the misuse of views for creating pseudoconstants is not recommended. 

Happy coding!

Previous Post <<||>> Next Post

💫Data Warehousing and Dynamics 365 for Finance and Operations - A Few Issues to Consider I

Data Warehousing
Data Warehousing Series

Introduction

Besides the fact that data professionals don't have direct access to D365 F&O production environments (direct access is available only to sandboxes), which was from the beginning an important constraint imposed by the architecture, there are a few more challenges that need to be addressed when working with the data.

Case Sensitivity

SQL Server is by default not case sensitive, therefore, depending on the channel through which the data came, values can appear in upper case, lower case, or a mixture of both. Even if this isn't an issue in D365, it can become an issue when the data leave the environment. E.g., Power Query is case sensitive (while DAX is case insensitive), thus, if a field containing a mix of values participates in a join or aggregation, this will result in unexpected behavior (e.g., duplicates, ignored records). It's primarily the case of the Company (aka DataAreaId) field available in most of the important tables.

The ideal solution would be to make sure that the values are correct by design, however this can't always be enforced. Otherwise, when using the data outside of D365 F&O, the solution would be to transform all the values to upper case (or lower case), as sketched below. However, even this step might occur too late, e.g., when the data are exported to the Azure Data Lake in the parquet file format.
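
As a sketch, the standardization can be enforced in a base view on top of the synchronized table (the table and columns below serve only as an example):

-- example: standardizing the company code in a base view
SELECT Upper(CIJ.DataAreaId) DataAreaId
, CIJ.InvoiceId
, CIJ.InvoiceDate
FROM dbo.CustInvoiceJour CIJ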

Unique Keys

A unique record in D365 F&O was usually identified in earlier versions by the RecId and DataAreaId fields, to which the Partition field was later added. This means that most joins will need to consider all 3 columns, which adds some overhead. In some environments there's only one Partition defined (and thus the field can be ignored), however that's no guarantee. 

As long as developers use SQL, there's no issue with using multiple fields in JOINs, though in Power Query a unique key must be created from the respective fields so that merges are possible, as sketched below. Actually, SQL-based JOINs would also benefit if each record were identified by a single field.
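
A sketch for deriving such a single-field key, e.g. by concatenating the three columns with a separator (the separator and format are a matter of choice; the table serves only as an example):

-- example: building a surrogate key usable in Power Query merges
SELECT Cast(CIJ.RecId as nvarchar(20)) + '|' + CIJ.DataAreaId + '|' + Cast(CIJ.Partition as nvarchar(20)) UniqueKey
, CIJ.InvoiceId
FROM dbo.CustInvoiceJour CIJ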

Audit Metadata

Not all tables have fields that designate when a record was created or last modified, respectively the user who performed the action. The fields can be added manually when setting up the system, however that's seldom done. This makes it difficult to audit the records, and it's sometimes a challenge for reporting as well, respectively for troubleshooting the differences between the DWH and the source system. Fortunately, the Export to Data Lake adds a timestamp reflecting the time when the record was synchronized, though it can be used only for the records synchronized after the initial load. 
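
For example, assuming the synchronization timestamp is exposed as a column (shown here as SinkModifiedOn, as used by Synapse Link; the actual name depends on the mechanism and version), the recently synchronized records could be isolated as follows:

-- example: records synchronized after a given date (column name may differ)
SELECT count(*) NoRecords
FROM dbo.CustInvoiceJour CIJ
WHERE CIJ.SinkModifiedOn >= '20231001'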

Tables vs. Entities

Data are modified in D365 F&O via a collection of entities, which are nothing but views that encapsulate the business logic and are based on base tables, other views, or a combination of both. The Export to Data Lake (*) is based on the tables, while Link to Data Lake is based on data entities. 

Using the base tables means that the developer must reengineer the logic from the views. In some cases it might work to create the entities as views in the DWH environment, though some features might not be supported. It's the case of serverless and dedicated SQL pools, which support only a subset of the features available in a standard Azure SQL database. 

The developer can try to replicate the logic from the entities, considering only the logic needed by the business, especially when only a subset of the functionality available in the entity was used. The newly created views can thus become more readable and maintainable. On the other side, if the logic in the entity changes, the changes need to be reflected in the DWH views as well. 

Using the entity-based data makes sure that the data are consistent between environments. Unfortunately, Microsoft found out that it isn't so easy to synchronize the data at entity level. Moreover, there are multiple entities based on the same table that reflect only a subset of the columns or rows. Thus, to cover all the fields from a base table, one might be forced to synchronize multiple entities, leading to data duplication.  

In theory, both mechanisms can be used within the same environment, even if this approach goes against the single source of truth principle, given that data are duplicated. 

Data Validation in the Data Lake

One scenario in which both sources are useful is when validating whether the synchronization mechanism worked as expected. Thus, one can compare the number of records and check whether there are differences that can't be explained. However, does it make sense to "duplicate" database objects only for this purpose?

Ideally, validating whether a record was synchronized should be done in the source environment (e.g., via a timestamp). That's difficult to achieve, especially when there's no direct access to the source database (as is the case for Production databases). Fortunately, Dataverse provides this functionality, even if it might not be bulletproof. 

In extremis, the most reliable approach is to copy the production environment to a sandbox and count the records for each table, using the time when the refresh occurred as the baseline for comparison.
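
On the SQL Server side (e.g., in the sandbox copy), a sketch for getting the record counts for all tables at once relies on the catalog views:

-- record counts per table via the catalog views (heap or clustered index)
SELECT Schema_Name(TBL.schema_id) SchemaName
, TBL.name TableName
, SUM(PRT.rows) NoRecords
FROM sys.tables TBL
     JOIN sys.partitions PRT
       ON TBL.object_id = PRT.object_id
      AND PRT.index_id IN (0, 1)
GROUP BY Schema_Name(TBL.schema_id)
, TBL.name
ORDER BY SchemaName
, TableName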

Base Enum Values

Lists of values that don't have their own tables are managed within the application as Base Enums and, naturally, only the numeric values are saved to the database. Even if this is practical for the application, it's a nightmare for the people using data exported from the database, as one needs to convert the codes to meaningful values. Some of the mappings between the codes and values are documented in two system tables, and even in old language-based documentation, though both sources are far from complete. As an alternative, one can try to discover the values in the system. 

Unfortunately, the mappings need to be repeated whenever the Enum-based attribute is used in multiple places. One can reduce the duplication by encapsulating the logic into a view (aka "base view") and reusing it accordingly (see the logic for TDM.vEcoResProduct).

Even if the values for many of the Enums are stored in the EnumValueTable table, with the Enum's name available in the EnumIdTable table, it's not a good idea to retrieve the values via a JOIN in the business logic. This would complicate the business logic unnecessarily. A CASE is more efficient, even if occasionally more difficult to maintain. Unfortunately, there's no timestamp to identify which values were added recently.

Note:
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.

Previous Post <<||>> Next Post

🪄💫SSRS (& Paginated Reports): Dynamics 365 F&O (Part II: Products Master Report via Base Tables)

As mentioned in the previous post, building the Products Master paginated report via the base D365 FO tables synchronized to the Data Lake involves more effort and implies duplicating the logic from the data entity(-ies) used. One can attempt to recreate the entities 1:1 as they are defined in the D365 database (an Azure SQL database), however chances are high that in certain areas (e.g., financial dimensions) the features used in building the logic are not supported. Moreover, it might be the case that only a subset of the functionality was implemented, and with this, pieces of the logic become obsolete. 

There's also a limitation on the number of tables that can be synchronized, therefore one needs to compromise. In dbo.EcoResProductV2Entity there are about 20 tables involved, though probably only half of them are used. The base table dbo.EcoResProduct contains most of the fields also available in the data entity, though the names of the various attributes need to be retrieved from several other tables. 

Some of the issues met in working with data entities (see the previous post) appear in this scenario as well: (2) there are fields based on Enums that store only the code, and the corresponding values need to be mapped; (4) further transformations are needed (e.g., converting fields to the expected formats). It makes sense to similarly encapsulate the logic in views, though this time one or two additional layers of logic are needed. One can choose to do this for each table in scope, or replicate the data entity's logic by focusing only on the needed parts. Both approaches will prove challenging in certain scenarios. Let's consider the latter:

CREATE OR ALTER VIEW TDM.vEcoResProductV2Entity
AS 
/*
name: Products - Master (base view)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT ITM.ProductType
, CASE ITM.ProductType
    WHEN 1 THEN 'Item'
    WHEN 2 THEN 'Service'
  END ProductTypeName
, CAST((CASE WHEN ITM.InstanceRelationType = 13678 THEN 1 ELSE 2 END) AS INT) ProductSubtype
, CASE 
    WHEN ITM.InstanceRelationType = 13678 THEN 'Product'
    ELSE 'Product Master'
  END ProductSubtypeName
, ITM.DisplayProductNumber ProductNumber
, Replace(Replace(ILT.Name, char(10), ' '), char(13), ' ') ProductName
, Replace(Replace(ILT.Description, char(10), ' '), char(13), ' ') ProductDescription
, PCA.CategoryName RetailProductCategoryName 
, PCA.CategoryCode RetailProductCategoryCode
, PDG.Name ProductDimensionGroupName 
, SDG.Name StorageDimensionGroupName 
, TDG.Name TrackingDimensionGroupName 
, ITM.RetailColorGroupId ProductColorGroupId 
, ITM.RetailSizeGroupId ProductSizeGroupId 
, ITM.RetailStyleGroupId ProductStyleGroupId
, ITM.VariantConfigurationTechnology
, CASE ITM.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'Predefined Variants'
    WHEN 2 THEN 'Dimension Based'
    WHEN 3 THEN 'Rule Based'
    WHEN 4 THEN 'Constraint Based'
  END VariantConfigurationTechnologyName
, CASE WHEN KIT.ProductMaster IS NOT NULL THEN 1 ELSE 0 END IsProductKit
, CASE WHEN KIT.ProductMaster IS NOT NULL THEN 'Yes' ELSE 'No' END IsProductKitName
, ITM.PDSCWProduct IsCatchWeightproduct
, CASE ITM.PDSCWProduct WHEN 1 THEN 'Yes' ELSE 'No' END IsCatchWeightproductName
, ITM.IsProductVariantUnitConversionEnabled
, CASE ITM.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, PPD.ProductDimensionGroup ProductDimensionGroupRecId
, PSG.StorageDimensionGroup StorageDimensionGroupRecId
, PTD.TrackingDimensionGroup TrackingDimensionGroupRecId
, PCA.RetailCategoryRecId
, ITM.RecId
, ITM.Partition
FROM dbo.EcoResProduct ITM
     LEFT JOIN dbo.EcoResProductTranslation ILT
       ON ITM.RecId = ILT.Product 
      AND ITM.Partition = ILT.Partition 
      AND ILT.LanguageId = 'en-us'
     LEFT JOIN dbo.EcoResProductDimensionGroupProduct PPD
       ON ITM.RecId = PPD.Product
      AND ITM.Partition = PPD.Partition
          LEFT JOIN dbo.EcoResProductDimensionGroup PDG
            ON PPD.ProductDimensionGroup = PDG.RecId 
           AND PPD.Partition = PDG.Partition 
     LEFT JOIN dbo.EcoResStorageDimensionGroupProduct PSG
       ON ITM.RecId = PSG.Product
      AND ITM.Partition = PSG.Partition
          LEFT JOIN dbo.EcoResStorageDimensionGroup SDG
            ON PSG.StorageDimensionGroup = SDG.RecId 
           AND PSG.Partition = SDG.Partition 
     LEFT JOIN dbo.EcoResTrackingDimensionGroupProduct PTD
       ON ITM.RecId = PTD.Product
      AND ITM.Partition = PTD.Partition
          LEFT JOIN dbo.EcoResTrackingDimensionGroup TDG
            ON PTD.TrackingDimensionGroup = TDG.RecId 
           AND PTD.Partition = TDG.Partition 
     LEFT JOIN (-- product retail category 
       SELECT PCA.Product 
       , PCA.Partition 
       , CAT.Code CategoryCode 
       , CAT.Name CategoryName
       , PCA.Category RetailCategoryRecId
       FROM dbo.EcoResProductCategory PCA
            JOIN dbo.EcoResProductRetailCategoryHierarchy RCH
              ON PCA.CategoryHierarchy = RCH.RetailCategoryHierarchy
             AND PCA.Product = RCH.Product 
             AND PCA.Partition = RCH.Partition 
            JOIN dbo.EcoResCategory CAT
              ON PCA.Category = CAT.RecId 
             AND PCA.Partition = CAT.Partition 
          ) PCA
       ON ITM.RecId = PCA.Product 
      AND ITM.Partition = PCA.Partition 
     LEFT JOIN dbo.RetailKit KIT
       ON ITM.RecId = KIT.ProductMaster 
      AND ITM.Partition = KIT.Partition
WHERE NOT(ITM.InstanceRelationType = 4211)

As can be seen, the logic is quite complex, and only about half of the entity's tables were used. There will be entities even more complex than this (the query was restructured for understandability). On the other hand, there will be environments where only half of the above tables are used (e.g., when no inventory and/or dimensions are needed). 

To test the view, one just needs to change the schema referenced in the logic. The view is created under the TDM (Table Data Model) schema, so there should be no issues as long as the schemas are used correctly. However, when duplicating logic, one should check whether the values match, respectively whether the objects return the same record count:

-- checking whether the values match for an example
SELECT * 
FROM TDM.vEcoResProductV2Entity
WHERE ProductNumber = '0169'

SELECT * 
FROM EDM.vEcoResProductV2Entity
WHERE ProductNumber = '0169'


-- checking the number of records
SELECT count(*)
FROM TDM.vEcoResProductV2Entity

SELECT count(*)
FROM EDM.vEcoResProductV2Entity

There are several optimization or restructuring opportunities in the logic, e.g., the 'product retail category' subquery can be created as a separate view, as sketched below. Also, the pairings of two tables for the 'dimension groups' could be encapsulated individually in views. This can increase the number of views considerably, though in some cases the views could prove useful for troubleshooting or reuse. 
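
A sketch based on the logic above (the view's name is chosen for illustration):

CREATE OR ALTER VIEW TDM.vEcoResProductRetailCategory
AS
-- product retail category (base view)
SELECT PCA.Product 
, PCA.Partition 
, CAT.Code CategoryCode 
, CAT.Name CategoryName
, PCA.Category RetailCategoryRecId
FROM dbo.EcoResProductCategory PCA
     JOIN dbo.EcoResProductRetailCategoryHierarchy RCH
       ON PCA.CategoryHierarchy = RCH.RetailCategoryHierarchy
      AND PCA.Product = RCH.Product 
      AND PCA.Partition = RCH.Partition 
     JOIN dbo.EcoResCategory CAT
       ON PCA.Category = CAT.RecId 
      AND PCA.Partition = CAT.Partition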

Moreover, in environments with only one partition, the constraints on the respective field could be removed (though that's an important decision), while the value for the retail category could be hardcoded. Even if hardcoding values should in general be avoided, one needs to compromise when there are more important constraints. 

One way to reduce the logic's complexity is to create a "base view" for each table, in which the needed transformations are made, respectively only the needed columns are kept. Reducing the number of columns simplifies the identification of attributes. For example, the base view for dbo.EcoResProduct could be written as follows:

CREATE OR ALTER VIEW [TDM].[vEcoResProduct]
AS
/*
name: Products - master data (base view)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT PRD.DisplayProductNumber ProductNumber
, PRD.ProductType 
, CASE PRD.ProductType
    WHEN 1 THEN 'Product'
    WHEN 2 THEN 'ProductMaster'
    WHEN 3 THEN 'ProductVariant'
  END ProductTypeName
, CASE WHEN PRD.InstanceRelationType = 15969 THEN 1 ELSE 2 END ProductSubtype 
, CASE 
    WHEN PRD.InstanceRelationType = 15969 THEN 'Item'
    ELSE 'Product master'
  END ProductSubtypeName
, PRD.RetailSizeGroupId
, PRD.RetailColorGroupId
, PRD.RetailStyleGroupId
, PRD.RetailFlavorGroupId
, PRD.VariantConfigurationTechnology 
, CASE PRD.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'PredefinedVariants'
    WHEN 2 THEN 'DimensionBased'
    WHEN 3 THEN 'RuleBased'
    WHEN 4 THEN 'ConstraintBased'
  END VariantConfigurationTechnologyName
, PRD.IsProductVariantUnitConversionEnabled
, CASE PRD.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, PRD.RecId 
, PRD.Partition 
, PRD.ModifiedBy
FROM dbo.EcoResProduct PRD
WHERE NOT(PRD.InstanceRelationType  =  4211)
GO

A similar approach can be used for each table, or at least for the tables that need further transformations. There are several choices - ideally one should find a good-enough approach and stick to it. There will also be exceptions, though the general design should prevail!

The bottom line: when using the Export to Data Lake add-in (*), as only the export of tables is supported, the logic can occasionally become complex, though it's still manageable (in other areas the tables are straightforward to use). When using Link to Data Lake, one can rely on data entities, as long as they are synchronized correctly. However, one still needs to rely on tables to fill the gaps. 

Note:
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.

Happy coding!

Previous Post <<||>> Next Post

09 October 2023

🪄💫SSRS (& Paginated Reports): Dynamics 365 F&O (Part I: Products Master Report via Data Entities)

With the Export to Data Lake add-in (*), respectively with Azure Synapse Link for Dataverse with Azure Data Lake, it's now possible to create Power BI paginated reports based directly on the Dynamics 365 for Finance & Operations (D365 FO) data model as available in the Data Lake. With the add-in it's possible to export the tables from D365 FO, while with the second option it's possible to export data entities, and hopefully soon base tables as well. 

The two features can be used in parallel in the same environment and, even if they are based on different mechanisms, they synchronize the D365 FO data to the Data Lake as standard data files that can be further consumed by reporting tools via a serverless/dedicated SQL pool. Thus, one can build a data lakehouse/warehouse on top of the data for reporting. At least this was the approach before Microsoft Fabric. 
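
For example, in a serverless SQL pool the files can be queried directly via OPENROWSET, assuming the data are stored as parquet (the storage path below is only a placeholder; the format and folder structure depend on the configuration):

-- example: reading a synchronized table from the Data Lake (placeholder path)
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://<storage account>.dfs.core.windows.net/<container>/EcoResProduct/*.parquet',
    FORMAT = 'PARQUET'
) AS ITM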

So, in the end a developer has the option to choose between base tables and data entities, as synchronized to the Data Lake. Data entities encapsulate the logic as used by the D365 FO application and can thus allow developing a report with a minimum of overhead, at least in theory. Unfortunately, there are downsides that make the use of base tables a better approach, at least for certain scenarios (e.g., data entities are not available for synchronization or don't include all the needed fields). 

For the report developer the synchronization mechanism is less important. A set of database objects will be available, and reports can be built on top of them. Ideally, the base objects will have the same or similar names as in D365 FO, otherwise the differences between the two data models need to be translated in each piece of logic, which adds complexity and overhead to the development.

Thus, to build a report showing the Product master data, the developer can use the dbo.EcoResProductV2Entity data entity. Unfortunately, there are several important issues: (1) the name of the source entity can change as versions change; (2) there are fields based on Enums that store only the code, and the corresponding values need to be mapped; (3) some fields from the base table need to be made available; (4) further transformations are needed (e.g., converting fields to the expected formats). Thus, it's useful to encapsulate the logic in a view, where creating views is possible:

CREATE OR ALTER VIEW EDM.vEcoResProductV2Entity
AS 
/*
name:  Products - Master (base entity)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT ITM.ProductType
, CASE ITM.ProductType
    WHEN 1 THEN 'Item'
    WHEN 2 THEN 'Service'
  END ProductTypeName
, ITM.ProductSubtype
, CASE ITM.ProductSubtype
    WHEN 1 THEN 'Product'
    WHEN 2 THEN 'Product Master'
    WHEN 3 THEN 'Product Variant'
   END ProductSubtypeName
, ITM.ProductNumber
, Replace(Replace(ITM.ProductName, char(10), ' '), char(13), ' ')  ProductName
, Replace(Replace(ITM.ProductDescription, char(10), ' '), char(13), ' ') ProductDescription
, ITM.RetailProductCategoryName 
, ITM.ProductDimensionGroupName 
, ITM.StorageDimensionGroupName 
, ITM.TrackingDimensionGroupName 
, ITM.ProductColorGroupId 
, ITM.ProductSizeGroupId 
, ITM.ProductStyleGroupId
, ITM.VariantConfigurationTechnology
, CASE ITM.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'Predefined Variants'
    WHEN 2 THEN 'Dimension Based'
    WHEN 3 THEN 'Rule Based'
    WHEN 4 THEN 'Constraint Based'
  END VariantConfigurationTechnologyName
, ITM.IsProductKit
, CASE ITM.IsProductKit WHEN 1 THEN 'Yes' ELSE 'No' END IsProductKitName
, ITM.IsCatchWeightproduct
, CASE ITM.IsCatchWeightproduct WHEN 1 THEN 'Yes' ELSE 'No' END IsCatchWeightproductName
, ITM.IsProductVariantUnitConversionEnabled
, CASE ITM.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, ITM.ProductDimensionGroupRecId
, ITM.StorageDimensionGroupRecId
, ITM.TrackingDimensionGroupRecId
, ITM.RetailCategoryRecId
, ITM.RecId
, ITM.Partition
FROM dbo.EcoResProductV2Entity ITM

As can be seen, the view was created in the EDM (Entity Data Model) schema and has the "v" prefix added to the original name. The EDM schema was created to store the objects based on the data coming via data entities.
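
The schemas themselves need to be created upfront, e.g.:

-- creating the schemas used to organize the objects
CREATE SCHEMA EDM; -- Entity Data Model (objects based on data entities)
GO
CREATE SCHEMA TDM; -- Table Data Model (objects based on the base tables)
GO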

With this, the view's data can be consumed in the paginated report:

--Products - Master
SELECT ITM.ProductTypeName
, ITM.ProductSubtypeName
, ITM.ProductNumber
, ITM.ProductName
, ITM.RetailProductCategoryName 
, ITM.ProductDimensionGroupName 
, ITM.StorageDimensionGroupName 
, ITM.TrackingDimensionGroupName 
, ITM.ProductColorGroupId 
, ITM.ProductSizeGroupId 
, ITM.ProductStyleGroupId
, ITM.VariantConfigurationTechnologyName
, ITM.IsProductKitName
, ITM.IsCatchWeightproductName
FROM EDM.vEcoResProductV2Entity ITM
WHERE ITM.ProductName LIKE 'Mens%'
ORDER BY ITM.ProductNumber

One can use this query directly to generate the report, though in a second step, once the report was created, one might prefer to provide the query as an expression to the corresponding dataset, as this allows better handling of the parameters and building the query dynamically as fit:

= "--Products - Master" & vbCrLf 
& "SELECT ITM.ProductTypeName" & vbCrLf 
& ", ITM.ProductSubtypeName" & vbCrLf 
& ", ITM.ProductNumber" & vbCrLf 
& ", ITM.ProductName" & vbCrLf 
& ", ITM.RetailProductCategoryName " & vbCrLf 
& ", ITM.ProductDimensionGroupName " & vbCrLf 
& ", ITM.StorageDimensionGroupName " & vbCrLf 
& ", ITM.TrackingDimensionGroupName " & vbCrLf 
& ", ITM.ProductColorGroupId " & vbCrLf 
& ", ITM.ProductSizeGroupId " & vbCrLf 
& ", ITM.ProductStyleGroupId" & vbCrLf 
& ", ITM.VariantConfigurationTechnologyName" & vbCrLf 
& ", ITM.IsProductKitName" & vbCrLf 
& ", ITM.IsCatchWeightproductName" & vbCrLf 
& ", ITM.IsProductVariantUnitConversionEnabledName" & vbCrLf 
& "FROM EDM.vEcoResProductV2Entity ITM" & vbCrLf 
& "WHERE 0=0 " & vbCrLf 
& IIf(Parameters!ProductName.Value <> "", " AND ITM.ProductName LIKE @ProductName ", "") & vbCrLf 
& IIf(Parameters!ProductNumber.Value <> "", " AND ITM.ProductNumber LIKE @ProductNumber ", "") & vbCrLf 
& IIf(Parameters!ProductType.Value <> "", " AND ITM.ProductType = @ProductType ", "") & vbCrLf 
& IIf(Parameters!ProductSubtype.Value <> "", " AND ITM.ProductSubtype = @ProductSubtype ", "") & vbCrLf 
& "ORDER BY ITM.ProductNumber" & vbCrLf

The expression's lines can be generated using an Excel formula (observe that the empty space after vbCrLf needs to be kept, otherwise the report will fail):
="& """ & A1 & """ & vbCrLf 

The four parameters created for exemplification need to be defined accordingly in the dataset as well. And here's the report's output:

For Product Number and Name the user can use wildcards - that's what the "*" beside the parameter's Prompt signifies.

Unfortunately, the logic needs to be extended as soon as further attributes not available in the data entity are needed. In this scenario and several others, using the base tables directly allows more flexibility, even if the logic behind the data entity needs to be duplicated.

Notes:
(1) When building and testing the report, use only a small subset of the data. 
(2) For the dropdown parameters an "(all)" label with an empty value was added. It allows ignoring the parameter in the expression when it is selected. 
(3) For text values with a high probability of coming from Excel sheets (e.g., Product Name or Description) it might be a good idea to replace the char(10) (line feed), char(13) (carriage return) and even char(9) (horizontal tab) values, otherwise the data might not be exported correctly to Excel.
(4) If the definitions of the Enums change, the mappings need to be changed accordingly. 
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.

Happy coding!

Previous Post <<||>> Next Post

07 October 2023

🔦Process Management: "Manage Problem" Process Diagram in ITIL

Process diagrams for IT methodologies like ITIL can generally be approached at a lower level of detail than business processes (see the 'Create Product' process diagram), and thus the text blocks can be left out unless further high-level instructions need to be given. Because these processes are highly standardized, one can find many examples on the internet to use as inspiration. On the other hand, the processes need to be adapted to the organization's needs. 

Manage Problem process diagram

Compared with other similar process diagrams, this diagram attempts (1) to also highlight the interfaces with other processes (e.g., Manage Change, Manage Knowledge, etc.), and (2) to ensure that the user can cycle through the steps, respectively that there's no infinite loop, via the solvability question. 

The following definitions apply:
Change: the addition, modification or removal of anything that could have an effect on a service.
Incident: an unplanned interruption or reduction in the quality of a service.
Known Error: a problem that has a documented root cause and/or a workaround.
Problem: a cause of one or more incidents.
Resolution: an action taken to repair the root cause of an incident/problem or to implement a workaround.
Workaround: a means of reducing or eliminating the impact of an incident/problem for which a full resolution is not yet available.


06 October 2023

🔦💫Process Management: "Create Product" Process Diagram in Dynamics 365 for Finance & Operations

Introduction

A process diagram should provide employees with a bird’s-eye view of the most important steps needed to perform the process it describes. To be useful, the diagram needs to be succinct, complete, accurate and descriptive enough. Unfortunately, one needs to compromise to address all these requirements. Moreover, there are further challenges, like where to set boundaries between activities and subprocesses, or how much information to provide.

Dynamics 365 used to come with a set of standardized process descriptions and diagrams, though at least the support for the latter has been discontinued. They were useful as overviews, however sometimes they seemed to raise more questions than they answered. On the other hand, organizations implement only a subset of the available functionality, and thus the process diagrams can vary between organizations. In theory, the implementer or other service providers could help with a standardized set of process diagrams designed for specific industries, though this may involve further challenges.

Therefore, organizations might be forced to start from scratch. Even then, the results might not fulfill the expectations. At least where process diagrams are concerned, there seems to be a huge gap between theory and practice. Knowledge representation in its various forms, process diagrams included, can be considered an art, requiring more expertise and skills than usual.

Ideally, organizations should have process diagrams for all business-critical processes. A more relaxed approach could focus only on the important processes that need to be performed occasionally, and for which refreshers are necessary. The creation of master data belongs in this category, the creation of Products being maybe the most complex case.

The ‘Create Product’ process was chosen to exemplify how a process diagram could be constructed and to explain the design choices and further aspects. (Click on the diagram to see the full-size version!)

Create Product process diagram


As can be seen, the diagram starts with two subprocesses that are often omitted, even if they are quintessential for making sure that the next steps can be executed efficiently. The differentiation between activities and subprocesses was made based on the complexity of the steps and the responsibilities involved. When multiple steps need to be performed by other personas, this might be a sign that a process or subprocess is involved. When other personas are involved, the blocks have different colors.

Another important aspect is the use of succinct descriptions for each step. The building blocks of the diagram should in theory be enough, though that’s seldom the case. To fill the gap the employee needs to navigate between the blocks and descriptions, which is usually inefficient. Process management applications provide a better UI, though the content’s navigability can be challenging as well.

Even if the diagram attempts to generalize a Product’s creation, seldom-performed activities were left out; they can be added following the same model. Optional steps are marked by a decision block, reflecting the questions a persona needs to answer. They could also be left out.

At least in D365, the data can be imported via the Data Management Framework and/or the Excel add-in. Some steps can then be consolidated or split depending on which data entities are used, though the variations in the process are small. Ideally, there should be a description of the respective steps (e.g., which data entity applies for each step).

The process doesn’t consider the use of an approval workflow, respectively the newest features.

One might argue that the diagram doesn’t respect some of the conventions existing in Process Management. Some conventions make sense, though in this area as well one sometimes needs to compromise.

The following sections give the arguments for why the various steps were considered.

Approve Product datasheet

The data needed to create a Product usually comes from several departments (e.g. Sales, Procurement, Inventory Management, Engineering, etc.). Therefore, as for other master data, it’s recommended to have a Product datasheet in which the most important attributes about a product are tracked at the various levels of detail that apply. 

This approach is supposed to fill the gap in the process in which the creation of a product needs to be approved (e.g., somebody needs to confirm that there’s an entitled reason) to eliminate the unnecessary creation of products (incl. duplicates). Also, there are attributes like Name, Description, Unit of Measure or Prices that need further agreement. Moreover, in this way a single persona is responsible for the process’ execution, and the approach requires more coordination upfront rather than within the process. (It’s easier to have a call with all stakeholders to complete the list than to try to involve them in the middle of the process.)

The datasheet should also contain the attributes that might require system’s extension with further setup, and the new values should be marked as such.

Ideally, the datasheet should reflect the data structure of the entities needed by the import mechanism or allow easy conversions to them.

Setup System

Setup changes may reach deep inside several modules, requiring further permissions. Given the sensitive nature of the changes, it’s better for these changes to be performed by the people responsible for the respective areas.

Some changes, including the hypothetical ones, might also require further tests. Therefore, this part of the process should be triggered early enough so that delays are kept to a minimum.  

The system’s setup should ideally be documented (e.g. via a golden configuration), together with the policies that apply.

Create Product(s)

The Product datasheet serves as the basis for creating the Products during the current and the following steps. The data entry should be just a replication of the data from the datasheet, without further transformations, which tend to increase the chances for mistakes.

Supposing that D365 is the master system, which usually should be the case, the products can then also be created in third-party systems once the Product number is available - systems in which further restrictions, policies and value mappings may apply.   

Ideally, there should be an automatic interface responsible for data synchronization, otherwise manual effort is involved.

Add language specific names/descriptions

Maintaining both a Product’s name and its descriptions in the various supported languages should be mandatory. The attributes should reflect the level of specificity required.

Assign Product dimensions

The Product datasheet should reflect whether the Product requires dimensions, which the respective values are, respectively which combinations are allowed.

Maintain Product categories

Categories’ maintenance is usually performed by other roles and belongs in the Setup System process. This step includes the assignment of Products to a category, respectively the maintenance of further attributes like Main account or Item sales tax group.

Release Product(s)

The product(s) available in the datasheet are released to the Legal entities in scope. This just makes the Products available for further maintenance.

Update Released product(s)

The Product datasheet is used as the basis for entering the Legal entity-specific attributes.

Update Default order settings

This step requires updating the attributes that deviate from defaults (e.g. Default Site, Stopped, etc.).

Maintain Product ext. Descriptions

Product external information might be needed for the Customers and Vendors to which the Products are sold, respectively from which they are purchased. The entries are needed for each Product dimension that applies.

Maintain Trade agreements

Trade agreements allow transparency of the Sales and Procurement prices that apply for a time interval, specific group or other characteristics. Therefore, they should be used when possible.

Maintain Bar codes

Bar codes usually apply to inventory-based products. Multiple bar code types may apply.

Validate Product master data

This step involves a review of the master data just entered, though it can involve an interface to the ‘Monitor Master data’ process, when such a process was defined as part of the Data Management or Governance initiative. The interfaced process could be triggered as part of the initial process or as part of regular checks, especially when the policies changed.

Typically, the validation of the Product master data should be done by another persona, following the four-eyes principle.

Having a set of reports with all the attributes in scope (aka ‘Product master data reports’) can easily pinpoint where the gaps are. Moreover, the ‘Product master data policies’ could be built within these reports, however this is a long shot. If the policies are known, a simple review should be enough.  

Correct Product data

Besides troubleshooting, this step involves reviewing several or all of the steps performed before and taking the necessary actions. Ideally, a list of the most frequently met scenarios, respectively of fixes and workarounds, should be available.

Previous post <<||>> Next post
