10 October 2023

🪄💫SSRS (& Paginated Reports): Products Master Report in Dynamics 365 for Finance & Operations via Base Tables

As mentioned in the previous post, building the Products Master paginated report via the base D365 FO tables synchronized to the Data Lake involves more effort and implies duplicating the logic from the data entity (or entities) used. One can attempt to recreate the entities 1:1 as they are defined in the D365 database (an Azure SQL database), however the chances are high that in certain areas (e.g. financial dimensions) the features used in building the logic are not supported. Moreover, it might be the case that only a subset of the functionality was implemented, and with this, pieces of logic become obsolete. 

There's also a limitation on the number of tables that can be synchronized, and therefore one needs to compromise. About 20 tables are involved in dbo.EcoResProductV2Entity, though probably only half of them are used. The base table dbo.EcoResProduct contains most of the fields also available in the data entity, though the names of the various attributes need to be retrieved from the related tables. 

Some of the issues met in working with data entities (see the previous post) appear in this scenario as well: (2) there are fields based on Enums that store only the code, and the corresponding values need to be mapped, (4) further transformations are needed (e.g., converting fields to other formats). It makes sense to similarly encapsulate the logic in views, though this time one or two additional layers of logic are needed. One can choose to do this for each table in scope, or replicate the data entity's logic by focusing only on the needed parts. Both approaches will prove challenging in certain scenarios. Let's consider the latter:

CREATE OR ALTER VIEW TDM.vEcoResProductV2Entity
AS 
/*
name:  Products - Master (base view)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT ITM.ProductType
, CASE ITM.ProductType
    WHEN 1 THEN 'Item'
    WHEN 2 THEN 'Service'
  END ProductTypeName
, CAST((CASE WHEN ITM.InstanceRelationType = 13678 THEN 1 ELSE 2 END) AS INT) ProductSubtype
, CASE 
    WHEN ITM.InstanceRelationType = 13678 THEN 'Product'
    ELSE 'Product Master'
  END ProductSubtypeName
, ITM.DisplayProductNumber ProductNumber
, Replace(Replace(ILT.Name, char(10), ' '), char(13), ' ') ProductName
, Replace(Replace(ILT.Description, char(10), ' '), char(13), ' ') ProductDescription
, PCA.CategoryName RetailProductCategoryName 
, PCA.CategoryCode RetailProductCategoryCode
, PDG.Name ProductDimensionGroupName 
, SDG.Name StorageDimensionGroupName 
, TDG.Name TrackingDimensionGroupName 
, ITM.RetailColorGroupId ProductColorGroupId 
, ITM.RetailSizeGroupId ProductSizeGroupId 
, ITM.RetailStyleGroupId ProductStyleGroupId
, ITM.VariantConfigurationTechnology
, CASE ITM.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'Predefined Variants'
    WHEN 2 THEN 'Dimension Based'
    WHEN 3 THEN 'Rule Based'
    WHEN 4 THEN 'Constraint Based'
  END VariantConfigurationTechnologyName
, CASE WHEN KIT.ProductMaster IS NOT NULL THEN 1 ELSE 0 END IsProductKit
, CASE WHEN KIT.ProductMaster IS NOT NULL THEN 'Yes' ELSE 'No' END IsProductKitName
, ITM.PDSCWProduct IsCatchWeightproduct
, CASE ITM.PDSCWProduct WHEN 1 THEN 'Yes' ELSE 'No' END IsCatchWeightproductName
, ITM.IsProductVariantUnitConversionEnabled
, CASE ITM.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, PPD.ProductDimensionGroup ProductDimensionGroupRecId
, PSG.StorageDimensionGroup StorageDimensionGroupRecId
, PTD.TrackingDimensionGroup TrackingDimensionGroupRecId
, PCA.RetailCategoryRecId
, ITM.RecId
, ITM.Partition
FROM dbo.EcoResProduct ITM
     LEFT JOIN dbo.EcoResProductTranslation ILT
       ON ITM.RecId = ILT.Product 
      AND ITM.Partition = ILT.Partition 
      AND ILT.LanguageId = 'en-us'
     LEFT JOIN dbo.EcoResProductDimensionGroupProduct PPD
       ON ITM.RecId = PPD.Product
      AND ITM.Partition = PPD.Partition
          LEFT JOIN dbo.EcoResProductDimensionGroup PDG
            ON PPD.ProductDimensionGroup = PDG.RecId 
           AND PPD.Partition = PDG.Partition 
     LEFT JOIN dbo.EcoResStorageDimensionGroupProduct PSG
       ON ITM.RecId = PSG.Product
      AND ITM.Partition = PSG.Partition
          LEFT JOIN dbo.EcoResStorageDimensionGroup SDG
            ON PSG.StorageDimensionGroup = SDG.RecId 
           AND PSG.Partition = SDG.Partition 
     LEFT JOIN dbo.EcoResTrackingDimensionGroupProduct PTD
       ON ITM.RecId = PTD.Product
      AND ITM.Partition = PTD.Partition
          LEFT JOIN dbo.EcoResTrackingDimensionGroup TDG
            ON PTD.TrackingDimensionGroup = TDG.RecId 
           AND PTD.Partition = TDG.Partition 
     LEFT JOIN (-- product retail category 
       SELECT PCA.Product 
       , PCA.Partition 
       , CAT.Code CategoryCode 
       , CAT.Name CategoryName
       , PCA.Category RetailCategoryRecId
       FROM dbo.EcoResProductCategory PCA
            JOIN dbo.EcoResProductRetailCategoryHierarchy RCH
              ON PCA.CategoryHierarchy = RCH.RetailCategoryHierarchy
             AND PCA.Product = RCH.Product 
             AND PCA.Partition = RCH.Partition 
            JOIN dbo.EcoResCategory CAT
              ON PCA.Category = CAT.RecId 
             AND PCA.Partition = CAT.Partition 
          ) PCA
       ON ITM.RecId = PCA.Product 
      AND ITM.Partition = PCA.Partition 
     LEFT JOIN dbo.RetailKit KIT
       ON ITM.RecId = KIT.ProductMaster 
      AND ITM.Partition = KIT.Partition
WHERE NOT(ITM.InstanceRelationType = 4211)

As can be seen, the logic is quite complex even if only about half of the involved tables were used (the query was restructured for readability), and there will be entities even more complex than this. On the other hand, there will be environments where only half of the above tables are needed (e.g., when no inventory and/or dimensions are used). 

To test the view, one just needs to change the schema referenced in the logic. The view is created under the TDM (Table Data Model) schema, so there should be no issues as long as the schemas are used correctly. However, when duplicating logic, one should check whether the values match, respectively whether the objects return the same record count:

-- checking whether the values match for an example
SELECT * 
FROM TDM.vEcoResProductV2Entity
WHERE ProductNumber = '0169'

SELECT * 
FROM EDM.vEcoResProductV2Entity
WHERE ProductNumber = '0169'


-- checking the number of records
SELECT count(*)
FROM TDM.vEcoResProductV2Entity

SELECT count(*)
FROM EDM.vEcoResProductV2Entity
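
Going one step further, one can also compare the actual values across the two implementations. Below is a minimal sketch based on EXCEPT, assuming that a handful of shared columns is enough for the comparison; the check should ideally be run in both directions:

-- checking for differences between the two implementations
SELECT ProductNumber, ProductTypeName, ProductSubtypeName, ProductName
FROM TDM.vEcoResProductV2Entity
EXCEPT
SELECT ProductNumber, ProductTypeName, ProductSubtypeName, ProductName
FROM EDM.vEcoResProductV2Entity

-- ... and in the opposite direction
SELECT ProductNumber, ProductTypeName, ProductSubtypeName, ProductName
FROM EDM.vEcoResProductV2Entity
EXCEPT
SELECT ProductNumber, ProductTypeName, ProductSubtypeName, ProductName
FROM TDM.vEcoResProductV2Entity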

There are several optimizations or restructuring opportunities in the logic, e.g., the 'product retail category' subquery can be created as a separate view. Also, the pairs of tables behind the various 'dimension groups' could be encapsulated individually in views. This can increase the number of views created considerably, though depending on the case the views could prove useful for troubleshooting or reuse. 
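
For example, the 'product retail category' subquery could be moved into its own view along the following lines (just a sketch based on the subquery above; the view name is only a suggestion):

CREATE OR ALTER VIEW TDM.vEcoResProductRetailCategory
AS
-- product retail category (base view)
SELECT PCA.Product 
, PCA.Partition 
, CAT.Code CategoryCode 
, CAT.Name CategoryName
, PCA.Category RetailCategoryRecId
FROM dbo.EcoResProductCategory PCA
     JOIN dbo.EcoResProductRetailCategoryHierarchy RCH
       ON PCA.CategoryHierarchy = RCH.RetailCategoryHierarchy
      AND PCA.Product = RCH.Product 
      AND PCA.Partition = RCH.Partition 
     JOIN dbo.EcoResCategory CAT
       ON PCA.Category = CAT.RecId 
      AND PCA.Partition = CAT.Partition

The main view would then reference this view instead of the inline subquery.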

Moreover, in environments with only one partition, the constraints on the respective field could be removed (an important decision, though), while the value for the retail category could be hardcoded. Even if hardcoding values should in general be avoided, compromises are needed when there are more important constraints. 

One way to reduce the logic's complexity is to create a "base view" for each table, in which the needed transformations are made and only the needed columns are exposed. Reducing the number of columns simplifies the identification of attributes. For example, the base view for dbo.EcoResProduct could be written as follows:

CREATE OR ALTER VIEW [TDM].[vEcoResProduct]
AS
/*
name: Products - master data (base view)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT PRD.DisplayProductNumber ProductNumber
, PRD.ProductType 
, CASE PRD.ProductType
    WHEN 1 THEN 'Product'
    WHEN 2 THEN 'ProductMaster'
    WHEN 3 THEN 'ProductVariant'
  END ProductTypeName
, CASE WHEN PRD.InstanceRelationType = 15969 THEN 1 ELSE 2 END ProductSubtype 
, CASE 
    WHEN PRD.InstanceRelationType = 15969 THEN 'Item'
    ELSE 'Product master'  
   END ProductSubtypeName
, PRD.RetailSizeGroupId
, PRD.RetailColorGroupId
, PRD.RetailStyleGroupId
, PRD.RetailFlavorGroupId
, PRD.VariantConfigurationTechnology 
, CASE PRD.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'PredefinedVariants'
    WHEN 2 THEN 'DimensionBased'
    WHEN 3 THEN 'RuleBased'
    WHEN 4 THEN 'ConstraintBased'
  END VariantConfigurationTechnologyName
, PRD.IsProductVariantUnitConversionEnabled
, CASE PRD.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, PRD.RecId 
, PRD.Partition 
, PRD.ModifiedBy
FROM dbo.EcoResProduct PRD
WHERE NOT(PRD.InstanceRelationType = 4211)
GO

A similar approach can be used for each table, or at least for the tables that need further transformations. There are several choices - ideally one should find a good-enough approach and stick to it. There will also be exceptions, though the general design should prevail!
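
For example, once base views exist for the tables in scope, the main logic could be assembled from them instead of the raw tables. Below is a minimal sketch that combines TDM.vEcoResProduct with the raw translation table and the retail category view sketched above; the column list would need to be extended accordingly:

-- Products - Master assembled from base views (sketch)
SELECT ITM.ProductNumber
, ITM.ProductTypeName
, ITM.ProductSubtypeName
, Replace(Replace(ILT.Name, char(10), ' '), char(13), ' ') ProductName
, PCA.CategoryName RetailProductCategoryName
, ITM.VariantConfigurationTechnologyName
FROM TDM.vEcoResProduct ITM
     LEFT JOIN dbo.EcoResProductTranslation ILT
       ON ITM.RecId = ILT.Product
      AND ITM.Partition = ILT.Partition
      AND ILT.LanguageId = 'en-us'
     LEFT JOIN TDM.vEcoResProductRetailCategory PCA
       ON ITM.RecId = PCA.Product
      AND ITM.Partition = PCA.Partition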

The bottom line: when using the Export to Data Lake add-in (*), as only the export of tables is supported, the logic can occasionally become complex, though it remains manageable (in other areas the tables are straightforward to use). When using Synapse Link, one can rely on data entities, as long as they are synchronized correctly. However, one still needs to rely on the base tables to fill the gaps. 
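
For example, if an attribute like ModifiedBy is needed but not exposed by the data entity, the entity-based view could be joined back to the base table, provided the latter is synchronized as well (a sketch; the join on the product number is an assumption that should be validated):

-- filling a gap in the entity with a field from the base table (sketch)
SELECT ITM.ProductNumber
, ITM.ProductName
, PRD.ModifiedBy
FROM EDM.vEcoResProductV2Entity ITM
     LEFT JOIN dbo.EcoResProduct PRD
       ON ITM.ProductNumber = PRD.DisplayProductNumber
      AND ITM.Partition = PRD.Partition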

Note:
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.

Happy coding!

Previous Post <<||>> Next Post

09 October 2023

🪄💫SSRS (& Paginated Reports): Products Master Report in Dynamics 365 for Finance & Operations via Data Entities

With the Export to Data Lake add-in (*), respectively with Azure Synapse Link for Dataverse with Azure Data Lake, it's possible now to create Power BI paginated reports based directly on the Dynamics 365 Finance & Operations (D365 FO) data model as available in the Data Lake. With the add-in it's possible to export the tables from D365 FO, while with the second option it's possible to export data entities and hopefully soon base tables as well. 

The two features can be used in parallel in the same environment and, even if they are based on different mechanisms, they synchronize D365 FO data to the Data Lake in standard data files that can be further consumed by reporting tools via a serverless or dedicated SQL pool. Thus, one can build a data lakehouse/warehouse on top of the data for reporting. At least this was the approach before Microsoft Fabric. 

So, in the end a developer will have the option to choose between base tables and data entities, as synchronized to the Data Lake. Data entities encapsulate the logic as used by the D365 FO application and can thus allow developing a report with a minimum of overhead, at least in theory. Unfortunately, there are downsides that make the use of base tables a better approach, at least for certain scenarios (e.g. data entities are not available for synchronization or don't include all the needed fields). 

For the developer of reports the synchronization mechanism is less important. A set of database objects will be available, and reports can be built on top of them. Ideally, the base objects will have the same or similar names as in D365 FO, otherwise the differences between the two data models need to be translated in each piece of logic, which adds more complexity and overhead to the development.

Thus, to build a report showing the Product master data the developer can use the dbo.EcoResProductV2Entity data entity. Unfortunately, there are several important issues: (1) the name of the source entity can change as versions change, (2) there are fields based on Enums that store only the code, and the corresponding values need to be mapped, (3) some fields from the base table need to be made available, (4) further transformations are needed (e.g., converting fields to other formats). Therefore, it's useful to encapsulate the logic into a view, when creating views is possible:

CREATE OR ALTER VIEW EDM.vEcoResProductV2Entity
AS 
/*
name:  Products - Master (base entity)
created: 01.04.2021
modified: 01.04.2021
*/
SELECT ITM.ProductType
, CASE ITM.ProductType
    WHEN 1 THEN 'Item'
    WHEN 2 THEN 'Service'
  END ProductTypeName
, ITM.ProductSubtype
, CASE ITM.ProductSubtype
    WHEN 1 THEN 'Product'
    WHEN 2 THEN 'Product Master'
    WHEN 3 THEN 'Product Variant'
   END ProductSubtypeName
, ITM.ProductNumber
, Replace(Replace(ITM.ProductName, char(10), ' '), char(13), ' ')  ProductName
, Replace(Replace(ITM.ProductDescription, char(10), ' '), char(13), ' ') ProductDescription
, ITM.RetailProductCategoryName 
, ITM.ProductDimensionGroupName 
, ITM.StorageDimensionGroupName 
, ITM.TrackingDimensionGroupName 
, ITM.ProductColorGroupId 
, ITM.ProductSizeGroupId 
, ITM.ProductStyleGroupId
, ITM.VariantConfigurationTechnology
, CASE ITM.VariantConfigurationTechnology
    WHEN 0 THEN 'None'
    WHEN 1 THEN 'Predefined Variants'
    WHEN 2 THEN 'Dimension Based'
    WHEN 3 THEN 'Rule Based'
    WHEN 4 THEN 'Constraint Based'
  END VariantConfigurationTechnologyName
, ITM.IsProductKit
, CASE ITM.IsProductKit WHEN 1 THEN 'Yes' ELSE 'No' END IsProductKitName
, ITM.IsCatchWeightproduct
, CASE ITM.IsCatchWeightproduct WHEN 1 THEN 'Yes' ELSE 'No' END IsCatchWeightproductName
, ITM.IsProductVariantUnitConversionEnabled
, CASE ITM.IsProductVariantUnitConversionEnabled WHEN 1 THEN 'Yes' ELSE 'No' END IsProductVariantUnitConversionEnabledName
-- system
, ITM.ProductDimensionGroupRecId
, ITM.StorageDimensionGroupRecId
, ITM.TrackingDimensionGroupRecId
, ITM.RetailCategoryRecId
, ITM.RecId
, ITM.Partition
FROM dbo.EcoResProductV2Entity ITM

As can be seen, the view was created in the EDM (Entity Data Model) schema and has the "v" prefix added to the original name. The EDM schema was created to store the objects based on data coming via data entities.
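
If the schemas used here and in the related post (EDM, respectively TDM) don't exist yet in the database used for reporting, they need to be created upfront (a minimal sketch; names and permissions depend on the environment):

-- creating the schemas used to encapsulate the logic (run once)
CREATE SCHEMA EDM; -- Entity Data Model: views based on data entities
GO
CREATE SCHEMA TDM; -- Table Data Model: views based on base tables
GO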

With this, the view's data can be consumed in the paginated report:

--Products - Master
SELECT ITM.ProductTypeName
, ITM.ProductSubtypeName
, ITM.ProductNumber
, ITM.ProductName
, ITM.RetailProductCategoryName 
, ITM.ProductDimensionGroupName 
, ITM.StorageDimensionGroupName 
, ITM.TrackingDimensionGroupName 
, ITM.ProductColorGroupId 
, ITM.ProductSizeGroupId 
, ITM.ProductStyleGroupId
, ITM.VariantConfigurationTechnologyName
, ITM.IsProductKitName
, ITM.IsCatchWeightproductName
FROM EDM.vEcoResProductV2Entity ITM
WHERE ITM.ProductName LIKE 'Mens%'
ORDER BY ITM.ProductNumber

One can use this query directly to generate the report, though in a second step, once the report was created, one might prefer to provide the query as an expression to the corresponding dataset, as this allows better handling of the parameters and thus building the query dynamically as needed:

= "--Products - Master" & vbCrLf 
& "SELECT ITM.ProductTypeName" & vbCrLf 
& ", ITM.ProductSubtypeName" & vbCrLf 
& ", ITM.ProductNumber" & vbCrLf 
& ", ITM.ProductName" & vbCrLf 
& ", ITM.RetailProductCategoryName " & vbCrLf 
& ", ITM.ProductDimensionGroupName " & vbCrLf 
& ", ITM.StorageDimensionGroupName " & vbCrLf 
& ", ITM.TrackingDimensionGroupName " & vbCrLf 
& ", ITM.ProductColorGroupId " & vbCrLf 
& ", ITM.ProductSizeGroupId " & vbCrLf 
& ", ITM.ProductStyleGroupId" & vbCrLf 
& ", ITM.VariantConfigurationTechnologyName" & vbCrLf 
& ", ITM.IsProductKitName" & vbCrLf 
& ", ITM.IsCatchWeightproductName" & vbCrLf 
& ", ITM.IsProductVariantUnitConversionEnabledName" & vbCrLf 
& "FROM EDM.vEcoResProductV2Entity ITM" & vbCrLf 
& "WHERE 0=0 " & vbCrLf 
& IIf(Parameters!ProductName.Value<> "", " AND ITM.ProductName LIKE @ProductName ", "") & vbcrlf 
& IIf(Parameters!ProductNumber.Value<> "", " AND ITM.ProductNumber LIKE @ProductNumber", "") & vbcrlf 
& IIf(Parameters!ProductType.Value<> "", " 	AND ITM.ProductType = @ProductType", "") & vbcrlf 
& IIf(Parameters!ProductSubtype.Value<> "", " AND ITM.ProductSubtype = @ProductSubtype ", "") & vbcrlf 
& "ORDER BY ITM.ProductNumber" & vbCrLf

The expression can be created using an Excel formula (observe that the empty space after vbCrLf needs to be kept, otherwise the report will fail):
="& """ & A1 & """ & vbCrLf "

The four parameters created for exemplification need to be defined accordingly in the dataset as well. And here's the report's output:

For Product Number and Name the user can use wildcards - that's what the "*" beside the parameter's Prompt signifies.

Unfortunately, the logic needs to be extended as soon as further attributes not available in the data entity are needed. In this scenario and several others, using the base tables directly allows more flexibility, even if the logic behind the data entity needs to be duplicated.

Notes:
(1) When building and testing the report use only a small subset of the data. 
(2) For the dropdown parameters an "(all)" label with an empty value was added. It allows the expression to ignore the parameter when this value is selected. 
(3) In text values with a high probability of coming from Excel sheets (e.g. Product Name or Description) it might be a good idea to replace the char(10) line feed, char(13) carriage return, and even char(9) horizontal tab characters, otherwise the data might not be exported correctly to Excel.
(4) If the definitions for Enums change, the mappings need to be changed accordingly. 
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.

Happy coding!

Previous Post <<||>> Next Post

07 October 2023

🔦Process Management: "Manage Problem" Process Diagram in ITIL

Process diagrams for IT methodologies like ITIL can generally be approached at a lower level of detail than business processes (see the 'Create Product' process diagram), and thus the text blocks can be left out unless further high-level instructions need to be given. Because such processes are highly standardized, one can find many examples on the internet for inspiration. On the other hand, the processes need to be adapted to an organization's needs. 



Compared with other similar process diagrams, the diagram attempts (1) to also highlight the interfaces with other processes (e.g. Manage Change, Manage Knowledge, etc.), (2) to ensure that the user can cycle through the steps, respectively that there's no infinite loop, via the solvability question. 

The following definitions apply:
Change: the addition, modification or removal of anything that could have an effect on a service.
Incident: an unplanned interruption to a service or a reduction in its quality.
Known Error: a problem that has a documented root cause and/or a workaround.
Problem: a cause of one or more incidents.
Resolution: an action taken to repair the root cause of an incident/problem or to implement a workaround.
Workaround: a means of reducing or eliminating the impact of an incident/problem for which a full resolution is not yet available.


06 October 2023

🔦💫Process Management: "Create Product" Process Diagram in Dynamics 365 for Finance & Operations

Introduction

A process diagram should provide employees with a bird’s-eye view of the most important steps needed to perform the process it describes. To be useful, the diagram needs to be succinct, complete, accurate and descriptive enough. Unfortunately, one needs to compromise to address all these requirements. Moreover, there are further challenges, like where to set boundaries between activities and subprocesses, or how much information to provide.

Dynamics 365 used to come with a set of standardized process descriptions and diagrams, though support for the latter seems to have been discontinued. They were useful as overviews, however sometimes they seemed to raise more questions than they clarified. On the other hand, organizations implement only a subset of the available functionality, and thus the process diagrams can vary between organizations. In theory, the implementer or other service providers could help with a standardized set of process diagrams designed for specific industries, though this may involve further challenges.

Therefore, organizations might be forced to start from scratch. Even then, the results might not fulfill the expectations. At least where process diagrams are concerned, there seems to be a huge gap between theory and practice. Knowledge representation in its various forms, process diagrams included, can be considered an art, or at least requires more expertise and skill than usual.

Ideally, organizations should have process diagrams for all business-critical processes. A more relaxed approach could focus only on the important processes that need to be performed occasionally and for which refreshers are necessary. The creation of master data belongs in this category, the creation of Products being maybe the most complex case.

The ‘Create Product’ process was chosen to exemplify how a process diagram could be constructed and explain design choices and further aspects. (Click on the diagram to see the full-size version!)

Create Product process diagram


As can be seen, the diagram starts with two subprocesses often omitted, even if they are quintessential for making sure that the next steps can be executed efficiently. The differentiation between activities and subprocesses was made based on the complexity of the steps and the responsibilities involved. When multiple steps need to be performed by other personas, then this might be a sign that a process or subprocess is involved. When other personas are involved, the blocks have different colors.

Another important aspect is the use of succinct descriptions for each step. The building blocks of the diagrams should be in theory enough, though that’s seldom the case. To fill the gap the employee needs to navigate between the blocks and descriptions, which is usually inefficient. Process management applications provide a better UI, though contents’ navigability can be challenging as well.

Even if the diagram attempts to generalize a Product’s creation, seldom performed activities were left out and can be added after the same model. Optional steps are marked by a decision block reflecting thus the questions a persona needs to answer. They could be left out.

At least in D365 the data can be imported via the Data Management Framework and/or the Excel add-in. Some steps can then be consolidated or split depending on which data entities are used, though the variations in the process are small. Ideally, there should be a description of the respective steps (e.g., which data entity applies for each step).

The process doesn’t consider the use of an approval workflow, respectively the newest features.

One might argue that the diagram doesn’t respect maybe some of the conventions existing in Process Management. Some conventions make sense, though also in this area one needs to compromise sometimes.

In what follows are given arguments why the various steps were considered.

Approve Product datasheet

The data needed to create a Product usually comes from several departments (e.g. Sales, Procurement, Inventory Management, Engineering, etc.). Therefore, as for other master data, it’s recommended to have a Product datasheet in which the most important attributes about a product are tracked at the various levels of detail that apply. 

This approach is supposed to fill the gap in the process in which the creation of a product needs to be approved (e.g. somebody needs to confirm that there's a legitimate reason), to eliminate the unnecessary creation of products (incl. duplicates). Also, there are attributes like Name, Description, Unit of Measure or Prices that need further agreement. Moreover, in this way a single persona is responsible for the process' execution, and the approach requires more coordination upfront than within the process. (It's easier to have a call with all stakeholders to complete the list than trying to involve them in the middle of the process.)

The datasheet should also contain the attributes that might require system’s extension with further setup, and the new values should be marked as such.

Ideally, the datasheet should reflect the data structure of the entities needed by the import mechanism or allow easy conversions to them.

Setup System

Setup changes may reach deep inside several modules, requiring further permissions. Given the sensitive nature of the changes, it’s better for these changes to be performed by the people responsible for the respective areas.

Some changes, including the hypothetical ones, might also require further tests. Therefore, this part of the process should be triggered early enough so the delays are kept to a minimum.  

System’s setup should be ideally documented (e.g. via golden configuration) together with the policies that apply.

Create Product(s)

The Product datasheet will serve as basis for creating the Products during the current and the following steps. The data entry should be just a replication of the data from datasheet without further transformations, which tend to increase the chances for mistakes.

Supposing that D365 is the master system, which usually should be the case, the products can also be created then in third-party systems once the Product number is available, systems in which further restrictions, policies and value mappings may apply.   

Ideally, there should be an automatic interface responsible for data synchronization, otherwise manual effort is involved.

Add language specific names/descriptions

Maintaining both a Product’s name and descriptions in the various supported languages should be mandatory. The attributes should reflect the level of specificity required.

Assign Product dimensions

The Product sheet should reflect whether the Product requires dimensions, and which are the respective values, respectively the combinations allowed.

Maintain Product categories

Categories’ maintenance is usually performed by other roles and belongs in the Setup System process. This step includes the assignment of Products to a category, respectively the maintenance of further attributes like Main account or Item sales tax group.

Release Product(s)

The product(s) available in the datasheet are released to the Legal entities in scope. This just makes the Products available for further maintenance.

Update Released product(s)

The Product datasheet is used as basis for entering the Legal entity-specific attributes.

Update Default order settings

This step requires updating the attributes that deviate from defaults (e.g. Default Site, Stopped, etc.).

Maintain Product ext. Descriptions

Product external information might be needed for the Customers and Vendors to which the Products are sold, respectively from which they are purchased. The entries are needed for each Product dimension that applies.

Maintain Trade agreements

Trade agreements allow transparency of the Sales and Procurement prices that apply for a time interval, specific group or other characteristics. Therefore, they should be used when possible.

Maintain Bar codes

Bar codes apply usually for inventory-based products. Multiple bar code types may apply.

Validate Product master data

This step involves a review of the master data just entered, though it can involve an interface to the ‘Monitor Master data’ process, when such a process was defined as part of the Data Management or Governance initiative. The interfaced process could be triggered as part of the initial process or as part of regular checks, especially when the policies changed.

Typically, the validation of the Product master data should be done by another persona, following the four-eyes principle.

Having a set of reports with all attributes in scope (aka ‘Product master data reports’) can easily pinpoint where the gaps are. Moreover, the ‘Product master data policies’ could be built within these reports however this is a long shot. If the policies are known, a simple review should be enough.  

Correct Product data

Besides troubleshooting, this step involves reviewing several or all the steps performed before and taking the necessary actions. Ideally, a list of the most frequently met scenarios, respectively of fixes and workarounds, should be available.

Previous post <<||>> Next post

03 October 2023

🧮ERP: Implementations (Part IV: Introducing an Upfront Proof-of-Concept Setup)

 

ERP Implementation

The standard phases of an ERP implementation are mandatory and inflexible, as there seems to exist an imposed succession of phases rooted in the customer’s need of having an upfront cost estimate for the project. Moreover, the concept-based approach reflected in the creation of a set of Functional Design Documents (FDDs), even if it’s supposed to increase an implementation’s accuracy, brings considerable challenges and an effort volume that could be spent in other areas. E.g., having a proof-of-concept setup subproject early in the project seems to bring more benefits.

Usually, before or during the requirements gathering phase the functional consultants together with the key users look at the legacy system(s) and data, questions are asked on both sides, and the findings are hopefully documented, though the outputs are high-level ideas or process design sketches. The sessions are abstract, and besides diagrams there’s no feedback mechanism to make sure that the parties understood customer’s processes and data structures, respectively that the key users understood what the future system is supposed to deliver. Some projects consider the building of 'AS-IS' diagrams and/or user stories during this phase, though their impact on project’s outcomes is questionable.

Why not also include in this phase hands-on training sessions for the key users, during which a system is set up based on the available information? For example, one can start with an existing shell of the system reflecting standard parameters used in the industry in which the customer works. Starting from this shell the key users and consultants go through the various processes and business scenarios, change parameters, add master data manually, sketch how the process could look, respectively understand the gaps from expectations, or maybe how the process can be changed to avoid customizations. That’s more effective than discussing the data structures and processes over and over!

Of course, this seems to increase the exploratory phase's complexity, though the increase is only apparent. Allowing key users to understand how the target system works has the potential of simplifying the project's planning and execution. Besides reaching a common understanding of the functionality, the key users can better evaluate whether the target system satisfies the high-level requirements, respectively better perform the various activities - requirements’ definition, reviews and user acceptance testing benefiting altogether. Moreover, they can train and involve other users earlier.

For this to work there are several assumptions. First, that the functional consultants know the target system(s), which is not necessarily needed in other approaches where a person (e.g. business analyst) who can understand how a system works and can document processes is enough. Second, the key users must have a good understanding of the legacy systems. Third, the shell should reflect the business needs as much as possible. Fourth, the necessary financial resources need to be made available upfront. Fifth, the business commitment must be there, and with this the key users should focus only on the project.

However, the most important aspect is that the parties involved need to buy into and support the idea! The FDDs bring a safety net and make sense for both parties, the setup being performed only after the signoff. On the other hand, because of the considerable number of iterations, FDDs involve high costs. Performing the setup first, as described above, and writing the FDDs later, if still needed, should improve the FDDs’ quality and require fewer iterations.

This approach allows an important volume of work to be done upfront, and even if further effort is needed for customizations and testing, a lower level of coordination is needed later, reducing thus the complexity of the planning and of the overall project.

Previous <<||>> Next

🧮ERP: Implementations (Part III: Simplifying the Implementation Project)

 

ERP Implementation

ERP implementations are complex projects, and one way to manage their complexity is to attempt reducing it (instead of answering complexity with complexity). A project's implementation methodology is probably the most important area that allows a project’s simplification, though none of the available methodologies seems to work well with such projects.

The point that differentiates the various methodologies is the solution’s conceptualization. In general, the expectation is to have a set of functional design documents (FDDs) that describe how the system operates and that can be used for programming the customizations, if any. The customer must review and sign off on the FDDs before the setup is done, respectively before the development starts. Moreover, given the dependencies between documents, they often need to be signed off together.

Unfortunately, FDDs reflect the degree of understanding of the target system and business requirements, and the gaps can prove to be a challenge for the parties involved, requiring many iterations until the documents are brought to the expected quality level. The higher the accuracy considered, the more iterations are needed. FDDs tend to consume a considerable percentage of the available financial resources, in extremis the whole budget being exhausted just for 'printed paper'. Moreover, the key users see the working functionality late in the project.

In agile methodologies, FDDs are replaced by user stories, and, if still needed, can be written as part of the sprints or later. Unfortunately, agile methodologies have their own challenges and constraints in ERP implementations. As functionality is explored, understood, and negotiated with the customer during the implementation, it’s seldom possible to provide a realistic cost estimation upfront. Given that most ERP implementations exceed their budget, starting a journey without having an idea how much the project costs seems to be a prohibitive approach for many customers. Moreover, the negotiations have the character of Change Requests, which can easily become a bottleneck for the project.

On the other hand, agile methodologies involve the customer earlier and the development could start earlier as well. The earlier the customer is involved, the earlier the key users understand how the system works, and thus they can be more efficient in performing their activities, respectively in identifying the gaps in understanding, trapping functional issues early in the process, at least in theory. Some projects address this need by having the key users trained, though the training environment usually has a different setup and data than needed by the customer. Wouldn’t it be a good idea to have the key users trained in an environment that reflects to a higher or lower degree the customer’s data and setup requirements?

In theory the setup for such an environment can be done upfront based on one standard configuration frequently met in customer’s industry. With this the functional consultants can start to configure the system together with the key users exploring the data and setup existing in the legacy system(s). This would allow increasing on both sides the depth of understanding and has the potential of speeding up the implementation. This can be started in the early phases, during the time in which the requirements are gathered. Ideally, a basic setup can exist already when the requirements are signed off. It’s true that this approach would mean a higher investment upfront, though the impact could be considerable. Excepting Data Migration and customizations the customer already has a good basis for Go-Live.

Of course, there can be further challenges, though the customer can make thus sure that the financial resources are well spent – having a usable system, respectively a good system understanding outweighs by far the extreme alternative of having high-quality unimplemented FDDs!

Previous <<||>> Next

🧮ERP: Implementations (Part II: It’s a Matter of Complexity)

 

ERP Implementation

There are many factors to blame for the implementation process’ inefficiency, however many of them can be associated with the complexity of the project itself, respectively of the application(s) involved. The problem of complexity can be addressed either by answering complexity with complexity, building a complex team to handle the tasks, which is seldom feasible even if many organizations do it, or by simplifying the implementation process and/or the application.

In what concerns the project, the complexity starts with requirements’ elicitation, the iterative transformations they undergo until the final functional requirements document is finalized, their evaluation and mapping to features, respectively gaps' identification. It’s a complex task because it involves understanding the business as well as the functionality available in the target system(s). Then comes the effort estimation, which, as the name suggests, is just a guess based on available historical numbers and/or experts’ opinions. High-level requirements are easier to manage than low-level requirements, however they allow for more gaps in understanding. The more detailed the specifications, the more they should help in the estimation process, though that’s the theory. A considerable number of factors can impact the process.

Even if there are standard activities in the implementation process, the number of resources involved from the customer as well from the partner(s) side makes the whole planning process a nightmare for any Project Manager, no matter how experienced he/she is.

Ideally, each member of the team should behave like a trooper, knowing by instinct when and what needs to be done, which are the expectations, etc. This might be close to expectation on the partner side as the resources more likely participated in similar projects, though there’s always a mix between levels of expertise, resources migrating between projects. Unfortunately, that’s seldom (never) the case on the customer side as the gap between reality and expectation is considerable.

Each team member requires a minimum of information/knowledge so he/she can perform the activities assigned. Moreover, the volume of coordination and cooperation is considerably higher than in other projects, a complexity that increases with the organization’s size and is inversely proportional to the organization’s maturity in managing projects and implementation-related activities. There’s thus a minimum of initial communication needed, and furthermore communication needs to occur between the parties involved. Moreover, the higher the lack of cohesion between the parties, the higher the need for communication, and this applies especially when multiple organizations are involved in the project.

The triple constraint of Project Management between scope, cost and time, respectively its impact on quality, has an important effect on the project. Resources need to be available when the project needs them and, especially on the partner side, only when they are needed. For the implementation project to be feasible for the partner, its resources must work on several projects in parallel, or the timing must be perfect so that no waiting times are involved, respectively the effort is concentrated only when needed. Such precision is possible maybe at the project’s beginning, though the further the project evolves, the more challenging the coordination of resources becomes. Similar considerations apply to the customer as well.

Thus, a more realistic expectation is to have resources available only at certain points in time, and the resources should be capable of juggling between projects, respectively between project and other activities. Prioritizing is a must, and sometimes the operations or other projects have higher priority. When the time is not available, resources need to compromise by reducing the level of quality.

On the other side, it would be great if most of the effort could be concentrated at the beginning of the project, the later interactions being minimal.  

Previous <<||>> Next

🧮ERP: Implementations (Part I: The Process Seems to be Broken)

 

ERP Implementations

Having participated in several ERP implementations, one has the expectation that things will change for the better when moving from one implementation to another. Things change positively in certain areas as experience is integrated, though on average the overall performance seems to be the same. Thus, one may wonder, how can this happen? Of course, there are many explanations - what went wrong, what could have been done better - and the list is usually quite long. However, history repeats itself in the next implementation. Something seems to be broken, or maybe this is the way implementations should work, though I doubt it!

An ERP implementation starts with a need, and the customer usually has an idea of what the respective need is about. They might even have a set of high-level or even low-level requirements, which should be the case when starting on such a journey. Then the customer selects an implementation partner, an event followed by a period of discovery in which the partner learns more about the business, including the overall infrastructure, business processes, data and people. Once the requirements are available, the partner can evaluate them to identify the deviations from the standard functionality available, which translate into customizations, sketch solutions, respectively make a first estimate of the costs and resources needed.

Of course, there can be multiple iterations of the process in which the requirements are reviewed, reevaluated, justified, prioritized by all parties and a common understanding, respectively an agreement on the scope and expectations is reached. In the process some requirements are dropped, others are modified or postponed for a later phase or later phases. The whole process can take a few months, though it’s mandatory for creating a workable estimate used as basis for the statement of work and the overall contract.

In parallel the parties can also work on a project plan and agree upon a project methodology, so that once the legal paperwork is signed, resources can be allocated to the project. A common practice is then for the functional consultants to generate, based on the requirements, a set of documents - functional design documents (FDDs), process diagrams - that should be used as basis for the setup, for programming the customizations and for User Acceptance Testing (UAT). Of course, the documents need to be reviewed by the business, gaps or misunderstandings mitigated, and this takes several iterations until the business can sign off on the respective documents. It’s the point where the setup and programming can start, usually half a year, or even a year or more, after the initial steps.

Depending on the scope, in the best-case scenario the setup will take one to two months, at least until having a system ready for UAT with business data as needed for Go-Live. The agreed customizations can translate into further months and effort not only for programming, but also for testing, reviewing and further mitigations. This would be the time when many of the key users see a working version of the system for the first time, which frankly might be too late. Of course, they read and reread the FDDs, though until this point everything was very abstract, and no matter how well such documents were written, they can’t replace the hands-on experience of working with the system, discovering the functionality, understanding how it works.

In the best-case scenario, the key users are satisfied with the results and the UAT, respectively Go-Live, can go on as planned, however the expectations for first time right are seldom (never) met. Further iterations and delays are then involved. Overall, the process doesn’t seem to be efficient!

Previous <<||>> Next

22 August 2023

🔖Book Review: Laurent Bossavit's The Leprechauns of Software Engineering (2015)




Software Engineering should be the "establishment and use of sound engineering principles to obtain economically software that is reliable and works on real machines efficiently" [2]. Working for more than 20 years in the field I feel sometimes that its foundation is a strange mix of sound and questionable ideas that take the form of methodologies, principles, standards, myths, folklore, statistics and other similar concepts that form its backbone.

I tend to look with critical eyes at the important numbers advanced in research and pseudo-scientific papers, especially when they’re related to my job, because I know that statistics are seldom what they appear to be - there are accidental and sometimes even intended errors made to support the facts. Unfortunately, the missing raw data and often the missing information about the methodologies used in collecting and processing the respective data make the numbers and/or graphics harder to understand, not to mention the considerable amount of effort and time spent to uncover the evidence trail.
Fortunately, there are other professionals who went further down the path of bibliographical references and shared their findings in blogs, papers, books and other media content. It’s also the case of Laurent Bossavit, who in his book, "The Leprechauns of Software Engineering" (2015), looks behind some of the numbers that over time became part of the leprechaunish folklore of IT professionals, puts them into their historical context and provides in the appendix the evidence trails for the reader to validate his findings. Over several chapters the author focuses mainly on the cost of defects, Boehm’s cone of uncertainty, the differences in productivity among individual programmers (aka the 10x claim), respectively the relation between poor requirements and defects.

His most important finding is that the references used in most of the researched sources advancing the above numbers were secondary, while the actual sources provide no direct information about the empirical data or the methodology used for its collection. The way the numbers are advanced and used makes one question the validity of the measurements performed, respectively the nature of the mistakes the authors made. Many of the cited papers hardly match the academic requirements of other scientific fields, being a mix of false claims, improperly conducted research and citations.

Secondly, he argues that the small sample sizes used as basis for the experiments, the small populations formed usually of students, respectively the way numbers were mixed without any reliable scientific rigor, make him (and the reader as well) question even more how the experiments were performed in the respective papers. With this, the larger body of research based on these sources should raise further concerns. The reader can thus ask himself/herself how deep the domino effect goes inside the Software Engineering field.

In the author’s opinion, Software Engineering as a social process "needs to be studied with tools that borrow as much from the social and cognitive sciences as they do from the mathematical theories of computation". How much it is possible to extend the theories and models of the respective fields is an open topic. The bottom line: the field of Software Engineering needs better scientific empirical experiments that are based on commonly agreed definitions, data collection and processing techniques, respectively higher standards for research publications. Without this, we’ll continue to compare apples with peaches and mix them in calculations so we can get some stories that support our leprechaunish theories.

Overall, the book is a good read for software engineers as well as for other IT professionals. Even if it barely scratched the surface of software myths and folklore, there’s enough material for the readers who want to dive deeper.

Previous Post  <<||>>  Next Post

References:
[1] Laurent Bossavit (2015) "The Leprechauns of Software Engineering"
[2] Friedrich Bauer (1972) "Software Engineering", Information Processing

