14 January 2024

💠🛠️🛠️🗒️SQL Server: Query Plan [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources (some unfortunately not available anymore).

Query plan (aka query execution plan, execution plan)

  • read-only reentrant structure that is shared by multiple users [1]
    • optimized version of the query tree [7]
  • defines
    • the sequence in which the source tables are accessed [7]
    • the methods used to extract data from each table [7]
      • whether it uses an index or a table scan [7]
    • the methods used to compute calculations [7]
    • the methods used to filter, aggregate, and sort data from each table [7]
  • built by the Query Optimizer
    • it chooses a plan using a set of heuristics that balance compilation time against plan optimality, settling for a good (rather than provably optimal) plan [7]
  • stored in the plan cache
    • remain in the plan cache as long as there is enough memory to store them
      • when memory pressure exists, a cost-based approach is used to determine which execution plans to remove from the plan cache [7]
        • execution plans are examined multiple times until enough have been removed to satisfy memory requirements [7]
          • ⇒ an execution plan may have its cost increased and decreased more than once [7]
        • the database engine uses the resource monitor and user worker threads to free memory from the plan cache in response to memory pressure [7]
          • both can examine plans concurrently to decrease the current cost of each unused execution plan [7]
          • the resource monitor removes execution plans from the plan cache when global memory pressure exists [7]
            • it frees memory to enforce policies for system memory, process memory, resource pool memory, and maximum size for all caches [7]
          • the user worker threads remove execution plans from the plan cache when single cache memory pressure exists [7]
            • they enforce policies for maximum single cache size and maximum single cache entries [7]
      • when memory pressure no longer exists, the decreasing of current cost of unused execution plans is stopped [7]
        • all execution plans remain in the plan cache, even if their cost is zero [7]
    • one or all plans can be removed from the plan cache (see the examples below) [7]
      • via DBCC FREEPROCCACHE
      • via DBCC FREESYSTEMCACHE, which can be used to clear any cache, including the plan cache
      • [SQL Server 2016] via ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE
        • clears the procedure (plan) cache for the database in scope [7]
    • changes in some configuration settings via sp_configure and reconfigure cause plans to be removed from plan cache [7]
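    • {example} a minimal T-SQL sketch for the cache-clearing options listed above (assumes sufficient permissions; the database-scoped variant requires SQL Server 2016 or later):
        -- remove all plans from the plan cache (server-wide)
        DBCC FREEPROCCACHE;
        -- clear only the cache store that holds ad-hoc and prepared plans
        DBCC FREESYSTEMCACHE ('SQL Plans');
        -- clear the plan cache for the database in scope (SQL Server 2016+)
        ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;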
  • agnostic to both parameter values and user-specific information
    • no user context is stored in the query plan [5]
  • the text of each newly submitted query is hashed and cached in memory, along with the original query text and a link to the execution context and the execution plan that the original query resulted in [2]
    • allows avoiding redundant parsing and optimization of queries [2]
  • must first go through a conversion before being passed to the LPE process to be carried out
    • ⇐ regardless of the stage that actually produces the execution plan
    • this is handled by the conversion stage of the optimization process
    • each optimized query plan is processed by this stage before being returned for execution [3]
  • [SQL Server 2005] is put in the plan cache when a cacheable batch is submitted for execution and compiled [1]
    • a portion from the buffer pool is used for caching query plans (aka plan cache) [3]
  • [SQL Server 2005] there are at most two instances of a query plan at any time in plan cache [1]
    • one for all of the serial executions 
    • one for all of the parallel executions
      • common for all parallel executions, regardless of their degrees of parallelism [5]
      • a necessary condition for compiling a parallel query plan is that the number of processors available to the query remains greater than one after applying:
        • the processor affinity mask
        • the 'max degree of parallelism' server-wide option (possibly set using the "sp_configure" stored procedure)
        • [SQL Server 2008] the MAXDOP limits imposed on the resource pool that the query is executing within, by the Resource Governor
        • any MAXDOP hint on the query
    • if two identical queries posed by the same user using two different sessions with the same session options arrive simultaneously, two query plans exist while they execute [3]
      •  at the end of their executions, the plan for only one of them is retained in the plan cache [3]
  • can be cached (aka query plan caching)
    • query plans are cached for many types of batches [3]
      • an exact text match between two ad-hoc queries is required before reuse can occur [3]
        • the text match is both case- and space-sensitive, even on case-insensitive servers [3]
    • if a query plan is not cached, its reuse opportunity is zero [3] 
      • such a plan will be compiled every time it is executed, potentially resulting in poor performance [3]
      • in some cases, non-caching is a desirable outcome [3]
  • can be reused (aka query plan reuse)
    • when the plan cache is searched for plan reuse opportunities, the comparisons are against query plans, not against execution contexts [3]
    • once a reusable query plan is found, an available execution context is found (causing execution context reuse) or freshly generated [3]
    • does not necessarily imply execution context reuse [3]
    • query plans should be reused whenever possible and optimal to avoid unnecessary query compilation costs [3] (a reuse check is sketched after this list)
    • factors that affect plan-reuse:
      • [stored procedures and triggers] the execution plan is executed separately from the execution plan for the batch calling the stored procedure or firing the trigger [7]
        • allows for greater reuse of their execution plans [7]
        • a query plan for a stored procedure is not reused when executed across different databases
          • doesn’t apply to ad-hoc queries, prepared queries, or dynamic SQL
        • trigger execution
          • the number of rows affected by that execution (1 versus n) is a distinguishing factor in determining a plan cache hit
          • ⇐ specific to triggers, and does not apply to stored procedures.
          • [SQL Server 2005] INSTEAD OF triggers
            • the "1-plan" is shared by executions that affect both 0 and 1 row
          • [SQL Server 2005] non-INSTEAD OF ("after") triggers
            • "1-plan" is only used by executions that affect 1 row and "n-plan" is used by executions that affect both 0 and n rows (n > 1)
          • triggers associated with bulk inserts are cached.
      • bulk insert statements are never cached 
      • a batch that contains any one literal longer than 8 KB is not cached
        • a literal's length is measured after constant folding is applied
      • batches flagged with the "replication flag" associated with a replication user are not matched with batches without that flag.
      • batch called from CLR is not matched with the same batch submitted from outside of CLR
        •  ⇐ applies to CLR triggers and non-CLR triggers
        • two CLR-submitted batches can reuse the same plan. 
      • notification queries are not matched with non-notification queries
      • query plans for queries submitted via sp_resyncquery are not cached
        • if the query is resubmitted (via sp_resyncquery or otherwise), it needs to be compiled again
      • batches executed within a cursor are not matched with non-cursor batches
      • certain SET options (see [3])
        • {best practice} to avoid SET option-related recompilations, establish SET options at connection time, and ensure that they do not change for the duration of the connection [3]
      • batches with unqualified object names result in non-reuse of query plans. 
      • stored procedure created with "CREATE PROCEDURE ... WITH RECOMPILE"
      • stored procedure executed using "EXEC ... WITH RECOMPILE"
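    • {example} a hedged way to check whether cached plans are being reused is to look at the usecounts column of sys.dm_exec_cached_plans (a value greater than 1 indicates reuse):
        SELECT cp.usecounts, cp.objtype, cp.size_in_bytes, st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        ORDER BY cp.usecounts DESC;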
  • trace flag 8675
    • determines when the optimizer times out while generating a query plan [2]
    • tells when the optimizer reaches the memory limit to which SQL Server restricts it [2]
      • ⇐ about 80% of the BPool
  • {concept} trivial plan 
    • results when the query optimizer determines that given the tables referenced in the query and the indexes existing on them, only one plan is possible [3]
      • usually a single table is referenced
      • doesn't require any cost estimation or comparisons between plans [2]
    • in the XML plan, the attribute StatementOptmLevel="TRIVIAL" will appear (see the lookup example after this list)
    • a query that has generated a trivial plan may not always generate a trivial plan
      • e.g. new indexes might be created on the underlying tables, and so multiple access paths become available to the query optimizer
    • may be generated for certain types of covered queries, joins between tables with constraint relationships, DML statements [2]
    • a query that contains a subquery is not eligible for a trivial plan [2]
    • using a variable in the query doesn't impair the optimizer's ability to select a trivial plan [2]
      • regardless of variable's value at runtime, a simple equality operator is being used to compare the variable with a uniquely indexed column, and the optimizer intrinsically knows that a trivial plan is the best option [2]
    • 8759 trace flag
      • allows detecting when a trivial plan has been chosen [2]
      • {enabled} the optimizer writes the first portion of a query for which a trivial plan is created to the error log [2]
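    • {example} an illustrative (and potentially expensive) lookup of cached trivial plans via the showplan XML attribute mentioned above:
        WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
        SELECT st.text, qp.query_plan
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE qp.query_plan.exist('//StmtSimple[@StatementOptmLevel="TRIVIAL"]') = 1;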
  • {concept} execution context
    • data structure that holds parameter values and user-specific information specific to the executed query
    • derived from a query plan 'on the fly' [3]
      • before a batch execution begins, a skeleton execution context is generated
        • as execution proceeds, the necessary execution context pieces are generated and put into the skeleton
          • ⇒ two execution contexts need not be identical even after user-specific information and query parameters are deleted from them
        • because structures of execution contexts derived from the same query plan can differ from one another, the execution context used for a particular execution has slight impact on performance [3]
          • the impact of the differences diminishes over time as the plan cache gets 'hot' and as a steady state is reached [3]
      • holds the values that are needed for a specific execution of a query plan [3]
      • multiple execution contexts may exist for a query plan
      • ⇒ it cannot exist in the plan cache without an associated query plan [3]
        • ⇒ whenever a query plan is removed from plan cache, all of the associated execution contexts are also removed along with it [3]
    • cached (aka execution context caching)
    • can be reused (aka execution context reuse)
    • each user concurrently executing a batch will have an execution context 
    • is not reentrant 
      • ⇐ at any point of time, an execution context can execute only one batch submitted by a session
        • ⇐ while the execution is happening, the context is not given to any other session or user [3]
    • if a batch execution generates an error of severity 11 or higher, the execution context is destroyed [3]
      • ⇒ even in the absence of memory pressure the number of execution contexts (for a given query plan) cached in plan cache can go up and down [3]
    • if a parallel query plan is compiled, a serial execution context may be derived from the parallel plan at execution time by disabling parallelism-specific plan operators [3]
    • execution contexts that are associated with parallel branches of the parallel plan are not cached, however serial branches of the parallel plan are cached [3]
  • {concept} plan cost
    • associated with query plans and execution contexts
    • partially controls how long the plan or context will live in the plan cache
    • the cost of an ad-hoc query is zero
    • [SQL Server 2000] for a query plan, the cost is a measure of the server resources (CPU time and I/O) it takes the query optimizer to optimize the batch
      • for ad-hoc queries, the cost is zero
    • [SQL Server 2000] for an execution context, the cost is a measure of the server resources (CPU time and I/O) it takes for the server to initialize the execution context so that the individual statements become ready for execution
      • doesn't include costs (CPU and I/O) incurred during batch executions
      • typically, execution context costs are lower than query plan costs 
    • [SQL Server 2005] the cost of a query plan is a measure of the amount of resources required to produce it
      • the cost is calculated in 'number of ticks' with a maximum value of 31
        • the individual parts of the cost are calculated as follows [3]
          • two I/Os cost 1 tick, with a maximum of 19 ticks
          • two context switches cost 1 tick, with a maximum of 8 ticks
          • sixteen memory pages (128 KB) cost 1 tick, with a maximum of 4 ticks
        • as soon as the size of the plan cache reaches 50% of the buffer pool size, the next plan cache access decrements the ticks of all of the plans by 1 each [3]
          • because this decrement is piggybacked on a thread that accesses the plan cache for plan lookup purpose, the decrement can be considered to occur in a lazy fashion [3]
        • if the sum of the sizes of all of the caches reaches or exceeds 75% of the buffer pool size, a dedicated resource monitor thread gets activated, and it decrements tick counts of all of the objects in all of the caches
          • this thread's behavior approximates that of the lazy-writer thread in SQL Server 2000
      • cost = I/O cost + context switch cost (a measure of CPU cost) + memory cost (see the worked example after this list)
      • the plan cache is distinct from the data cache
      • there are other functionality-specific caches
      • the lazy-writer process does not decrement costs 
    •  query plan reuse causes the query plan cost to be reset to its initial value [3]
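    • {example} a small worked example for the tick formula above (illustrative figures): a plan whose creation performed 10 I/Os, 4 context switches and used 256 KB of memory costs 10/2 + 4/2 + 256/128 = 5 + 2 + 2 = 9 ticks, well under the respective caps of 19, 8 and 4 ticks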
  • {concept} query optimization
    • the multi-phased process of selecting a 'good-enough' query execution plan from potentially many possible plans  [7]
    • involves the overhead of analyzing the query and selecting a plan
      • the overhead is typically saved several-fold when the query optimizer picks an efficient execution plan [7]
  • {concept} [SQL Server 2022] optimized plan forcing 
    • part of intelligent query processing family of features
    • reduces compilation overhead for repeating forced queries [7]
      • requires the Query Store to be enabled and in 'read write' mode
      • once the query execution plan is generated, specific compilation steps are stored for reuse as an optimization replay script [7]
        • stored as part of the compressed showplan XML in Query Store, in a hidden OptimizationReplay attribute [7]
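    • {example} a hedged configuration sketch (syntax as of SQL Server 2022; the query_id/plan_id values are placeholders taken from the Query Store views):
        -- Query Store must be enabled and writable
        ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
        -- make sure the feature is enabled at database scope (it is on by default in SQL Server 2022)
        ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_PLAN_FORCING = ON;
        -- force a plan so that subsequent compilations can replay the stored optimization steps
        EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;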
  • {concept} estimated execution plan
    • the compiled plan, as produced by the query optimizer [7]
  • {concept} actual execution plan
    • the compiled plan plus its execution context [7]
    • includes runtime information available after the execution completes [7]
      • e.g. execution warnings, or in newer versions of the Database Engine, the elapsed and CPU time used during execution [7]
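  • {example} a minimal sketch for obtaining the two plan flavors from T-SQL (GO is the batch separator used by SSMS/sqlcmd; the query itself is just a harmless placeholder):
      -- estimated plan: returns the compiled plan as XML without executing the statement
      SET SHOWPLAN_XML ON;
      GO
      SELECT COUNT(*) FROM sys.objects;
      GO
      SET SHOWPLAN_XML OFF;
      GO
      -- actual plan: executes the statement and returns the plan enriched with runtime information
      SET STATISTICS XML ON;
      GO
      SELECT COUNT(*) FROM sys.objects;
      GO
      SET STATISTICS XML OFF;
      GO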
  • {concept} Live Query Statistics
    • the same as the compiled plan plus its execution context [7]
    • includes runtime information during execution progress, updated every second [7]
      • e.g. the actual number of rows flowing through the operators
  • {concept} query plan hash
    • a binary hash value calculated on the execution plan for a given query [7]
    • used to uniquely identify similar execution plans [7]
  • {concept} query hash
    • a binary hash value calculated on the Transact-SQL text of a query [7]
    • used to uniquely identify queries [7]
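  • {example} an illustrative query that groups cached statistics by the two hashes above, which helps spotting textually similar queries that ended up with different plan shapes:
      SELECT qs.query_hash,
             COUNT(DISTINCT qs.query_plan_hash) AS distinct_plans,
             SUM(qs.execution_count) AS executions
      FROM sys.dm_exec_query_stats AS qs
      GROUP BY qs.query_hash
      ORDER BY distinct_plans DESC;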
  • {concept} plan cache 
    • part of the memory pool that is used to store execution plans [7]
    • has two stores for all compiled plans
      • Object Plans cache store (OBJCP) 
        • used for plans related to persisted objects (stored procedures, functions, and triggers) [7]
      • SQL Plans cache store (SQLCP) 
        • used for plans related to auto-parameterized, dynamic, or prepared queries [7]
    • has two additional stores that don't store compiled plans
      • Bound Trees cache store (PHDR) 
        • used for data structures (aka bound trees, algebraizer trees) used during plan compilation for views, constraints, and defaults [7]
      • Extended Stored Procedures cache store (XPROC) 
        • used for predefined system procedures that are defined using a DLL, not using Transact-SQL statements [7]
          • e.g. sp_executeSql or xp_cmdshell
            • the cached structure contains only the function name and the DLL name in which the procedure is implemented [7]
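    • {example} the four cache stores above can be inspected via sys.dm_os_memory_cache_counters (sketch; the pages_kb column applies to SQL Server 2012 and later, older versions expose single_pages_kb/multi_pages_kb instead):
        SELECT name, type, pages_kb, entries_count
        FROM sys.dm_os_memory_cache_counters
        WHERE type IN ('CACHESTORE_OBJCP', 'CACHESTORE_SQLCP',
                       'CACHESTORE_PHDR', 'CACHESTORE_XPROC');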
  • {concept} plan handle
    • transient identifier that allows retrieving a compiled plan from the plan cache [7]
      • remains constant only while the plan remains in the cache [7]
    • hash value derived from the compiled plan of the entire batch [7] 
    • remains the same even if one or more statements in the batch get recompiled [7]
    • if a plan was compiled for a batch instead of a single statement, the plan for individual statements in the batch can be retrieved using the plan handle and statement offsets [7]
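    • {example} an illustrative query that combines the plan handle with the statement offsets to extract an individual statement (and its plan) out of a cached batch:
        SELECT SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END
                     - qs.statement_start_offset) / 2) + 1) AS statement_text,
               qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp;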
  • {concept} recompile execution plans
    • certain changes in a database can cause an execution plan to be either inefficient or invalid, based on the new state of the database [7]
      • the changes that invalidate an execution plan are detected and the plan is marked as not valid [7]
        •  ⇒ a new plan must then be recompiled for the next connection that executes the query [7]
      • conditions that invalidate a plan include the following:
        • changes made to a table or view referenced by the query [7]
          • via ALTER TABLE or ALTER VIEW
        • changes made to a single procedure, which would drop all plans for that procedure from the cache [7]
          • via ALTER PROCEDURE
        • changes to any indexes used by the execution plan [7]
        • updates on statistics used by the execution plan, generated either explicitly from a statement or generated automatically.
        • dropping an index used by the execution plan.
        • an explicit call to sp_recompile.
        • large numbers of changes to keys 
          • generated by INSERT or DELETE statements from other users that modify a table referenced by the query [7]
        • for tables with triggers, if the number of rows in the inserted or deleted tables grows significantly [7]
        • executing a stored procedure using the WITH RECOMPILE option [7]
    • most recompilations are required either for statement correctness or to obtain potentially faster query execution plans [7]
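    • {example} two hedged ways to force recompilation mentioned above (the object names are placeholders):
        -- mark a procedure, table or view so that all plans referencing it are recompiled on next use
        EXEC sp_recompile N'dbo.SomeProcedure';
        -- compile a fresh plan for a single execution, without caching it
        EXEC dbo.SomeProcedure WITH RECOMPILE;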


Acronyms
MAXDOP - Maximum Degree of Parallelism

Resources:
[1] Scalability Experts, "Microsoft® SQL Server 2005: Changing the Paradigm", 2005
[2] Ken Henderson (2004) The Guru's Guide to SQL Server Architecture and Internals
[3] Greg Low (2013) Plan Caching and Recompilation in SQL Server 2012
[4] MCITP Self-Paced Training Kit Exam 70-442
[5] Microsoft Learn (2012) Execution Plan Caching and Reuse (link)
[6] Microsoft Learn (2023) Execution plan overview (link)
[7] Microsoft Learn (2023) Optimized plan forcing with Query Store (link)
[8] Microsoft Learn (2023) Query processing architecture guide (link)


10 January 2024

📦Data Migrations (DM): Conceptualization (Part VIII: Create a Data Migration Strategy for Dynamics 365)

Data Migration
Data Migration Series

Microsoft made available a module on creating a Data Migration strategy (DMS) for Dynamics 365 solutions [1] and provides a template for the DMS workshop on GitHub. Despite the all-encompassing title, the module focuses mainly on the latter. Here are several points to consider in addition.

The workshop allows gathering the most important aspects related to the DM; however, it's useful to make the strategy explicit as part of a document and share it with the team. The DMS document should follow the general elaboration of a strategy by providing information about the current and targeted state, goals and objectives, assumptions, dependencies, risks and opportunities, the architecture (including alternatives that need to be explored), direct and interfacing processes, roles involved and their responsibilities, communication paths (including escalation), etc.

Even if the template discusses general aspects, the treatment of best practices is minimal (see some technical and architectural aspects).

The DMS should describe when possible the high-level architecture planned for DM. The architecture section can be updated as the project progresses. 

One can plan for 3-4 dry-runs plus the UAT, though further iterations might be needed depending on the data quality, respectively the complexity of the logic needed. Each dry-run will have its own scope (in terms of entities and number of records) and objectives (in terms of passed tests), while the UAT attempts to mimic the same conditions as the DM for Go-Live.

Several iterations are usually needed to identify the entities in scope. Moreover, the scope will suffer several changes as new entities and migration areas are identified. There are high chances that some data entities will be forgotten or postponed until after Go-Live. Depending on the case, the business might approach a phased implementation, each phase with its own DM.

It's useful to have a list of the entities available in the organization and map them to the entities available in the target system(s). This allows identifying the gaps on the DM side. There will be entities for which data need to be provided, entities for which there's no counterpart in the target system, respectively entities with different levels of detail.

An important step of the process is defining the business rules that apply - what subset of data is migrated (e.g. active referenced master data, open transactions, etc.). The definitions will suffer further changes as business perception about what's needed changes.

As they'll go through multiple changes, it's useful to have such lists in Excel or similar repositories and reference them in the strategy and further documents. 

The strategy can also target the post-Go-Live migration activities, though the process and architecture might be slightly different, even if many of the requirements still apply. One can devise the further aspects as part of the DMS or consider them as part of the Data Management strategy and operations.

Despite the considerable overlap of the DM with Data Management, at least until now the DM is not considered as part of the latter. Big organizations with periodic DMs might need to consider a broader strategy and then address each project's deviations from it individually.

The DM is usually considered as a subproject of the implementation and should be managed as such. Depending on its scope and characteristics (principles, acceptance criteria) the DM might use a different methodology and timeline than the main project. Defining project's goals/objectives, outcomes, stakeholders, timeline, phases, and milestones, respectively the high-level assumptions, dependencies, risks, opportunities and constraints is Project Management 101.

The DM is critical for an implementation's success. It would be useful to describe the architectural alternatives, as well to provide a series of architectural recommendations.

Previous Post <<||>> Next Post

Resources:
[1] Microsoft learn (2023) Create a data migration strategy for Dynamics 365 solutions (link)

08 January 2024

🗒️Dynamics 365: Fixed Assets [Note]

Disclaimer: This is work in progress intended to consolidate information from various sources.

Fixed Assets (aka FA)

  • assets that a company owns and uses in the daily operations of the company and are not intended for resale to customers [1]
    • ⇐ the same asset might be classified as a FA in one company and as a current asset or inventory item in another [1]
  • represent permanent value and not just expenditures in the year of acquisition [1]
  • typically depreciated, or expensed, over their useful life
    • other adjustments might also be necessary
    • their useful lives span multiple years [1]
  • treated as balance sheet transactions in the year that they are acquired, and are posted as an asset to a balance sheet account [1]
  • {action} review asset value in the balance sheet (net book value) 
    • should be reviewed at least one time each year  [1]
      • it can be reviewed monthly, quarterly, semi-annually, or annually [1]
    • an adjustment of the asset value in the balance sheet (write-down or write-up) might be necessary  [1]
      • the adjustments are usually caused by some extraordinary occurrences in the market that affect the price if the company were to reacquire the asset. [1]
      • ⇐ accounting principles in some countries or regions prohibit the write-up of assets [1]
  • {action} remove asset from the accounting books
    • when a company no longer has use for the asset [1]
      • ⇐ because it is either being sold or scrapped [1]
    • the original acquisition price and accumulated depreciation of the asset are reversed, and any surplus or loss from the disposal is posted to the profit and loss statement [1]
  • {action} dispose asset
    • the fixed asset transactions that were previously posted are reversed out of the original accounts [1]
    • the net amounts are then moved to the appropriate account for gain and loss for asset disposal [1]
  • Fixed asset groups 
    • allow specifying default attributes for every FA that is assigned to a group [1]
      • include the service life and the depreciation convention.
      • FAs are assigned to a group when they are created [1]
    • the only required field when creating a FA [1]
  • Books 
    • track the financial value of a fixed asset over time by using the depreciation configuration that is defined in the depreciation profile
      • ⇐ each book tracks an independent financial lifecycle of an asset [1]
    • assigned to fixed asset groups
      • {default} the books assigned to the FA group are then assigned to the FA [1]
    •  books that are configured to post to the GL are associated with a posting profile
      • GL accounts are defined for each book in the posting profile and are used when fixed asset transactions are posted [1]
      • ⇐ the configuration is the default setting because it's typically used for corporate financial reporting [1]
    • books that don't post to the GL post only to the Fixed asset subledger and are typically used for tax reporting purposes [1]
      • this approach allows the flexibility to delete historical transactions for the asset book because they haven't been committed to GL [1]
      • {field} Post to general ledger
          • {No} disables posting to the general ledger for the book
          • sets the Posting layer field automatically to None
    • each book that is attached to a FA is set up for a particular posting layer that has an overall depreciation objective [1] 
    • derived books
      • setup in addition to the standard books [1]
      • simplify the posting of FA book transactions that are planned for regular intervals
        • one book is chosen as the primary book, which is usually the one used for accounting depreciation [1]
          • then attach to it other books that are set up to post transactions in the same intervals as the primary book [1]
            • ⇐ tax depreciation books are often set up as derived books [1]
        • allow posting transactions to different posting layers at the same time [1]
        • acquisitions, acquisition adjustments and disposals can be set up with it
          • use acquisition when the book and the derived book should be applied to the FA from the time of the fixed asset acquisition [1]
          • if the primary book and the derived books have the same intervals regarding sale or disposal, all FA transaction types are available for the setup of a derived book [1]
          • depreciation posted in the derived book will be the same amount as was posted for the primary book [1]
            • if the depreciation methods are different between the books, one should not generate depreciation transactions by using the derived process [1]
              • ⇐ the books can then be attached to specific fixed assets [1]
      • the specified derived transactions are posted against the derived books as an exact copy of the primary transaction [1]
        • ⇒ derived transactions are typically set up for acquisitions and disposals, not for depreciation transactions [1]
      • when posting transactions for a book that contains derived books, the derived book transactions are posted automatically in journals, purchase orders, or free text invoices [1]
      • when preparing the primary book transactions in the FA journal, one can view and modify the amounts of the derived transactions before posting them [1]
      • certain accounts are updated only once by postings of the primary book [1]
        • e.g. sales tax and customer or vendor accounts
      • derived book transactions are posted to the accounts that have been defined for the derived book in the FA posting profiles page [1]
      • {exception} books that are set up to post transactions at intervals other than the primary book intervals must be attached to the FA as separate books and not as derived books [1]
  • depreciation
    • an entry that expenses the part of the asset's original purchase price that was used during the year [1]
    • FAs are depreciated in different ways for different purposes
      • depreciation for tax purposes 
        • calculated by using current tax rules to achieve the highest possible depreciation before taxes [1]
      • depreciation for reporting purposes
        • calculated according to accounting laws and standards [1]
      • the various kinds of depreciation are calculated and recorded separately in the posting layers [1]
  • depreciation methods
    • straight line
      • computed by taking the costs of the acquisition and dividing those costs by the expected service life of the asset [1]
        • the rules are usually defined in the local legislation [1]
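      • {example} illustrative figures: an asset acquired for 10,000 with an expected service life of 5 years and no scrap value is depreciated by 10,000 / 5 = 2,000 per year (with a scrap value of 1,000, the yearly amount becomes (10,000 - 1,000) / 5 = 1,800)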
  • depreciation profiles 
    • allow configuring via rules how the value of an asset is depreciated over time
      • one must define the method of depreciation, the depreciation year (calendar year or fiscal year), and the frequency of depreciation [1]
    • a primary depreciation profile is assigned to every book
      • ⇐ books also have an alternative (aka switchover depreciation) profile, if this type of profile is applicable [1]
        • the depreciation proposal will switch to this profile when the alternative profile calculates a depreciation amount that is equal to or greater than the default depreciation profile [1]
      • ⇐ enable the Calculate depreciation option to automatically include the FA book in depreciation runs [1]
    • usually refers to tangible assets, but intangible assets can also use depreciation profiles [1]
    • financial assets are considered non-depreciable [1]
      •  e.g. shares of stocks or bonds 
    • extraordinary depreciation profile
      • used for additional depreciation of an asset in unusual circumstances [1]
        • e.g. record depreciation that results from a natural disaster
    • depreciation calculation methods:
      • Straight line service life
      • Reducing balance
      • Manual
      • Factor
      • Consumption
      • Straight line life remaining
      • 200% reducing balance
      • 175% reducing balance
      • 150% reducing balance
      • 125% reducing balance
      • ⇐ most companies use one or more of the straight-line methods, one or more of the reducing balance methods, or the manual method [1]
      • ⇐ the rationale for all methods is to allocate or accrue the depreciable value of the asset into accounting periods [1]
        • the depreciable value of the asset is the acquisition price reduced by a scrap value, if any
      • ⇐ many depreciation profiles with the same depreciation method can be created for different calculations [1]
  • posting profile 
    • must be defined for each book that will post to GL, but it can also be defined at a more detailed level [1]
      • e.g. can be defined for the combination of a book and a FA group, or even for an individual FA book [1]
      • set up a posting profile to record to the general ledger
    • for each posting profile, one must select the relevant transaction type and book, and then designate the ledger accounts [1]
  • {option} Create depreciation adjustments with basis adjustments 
    • {selected} depreciation adjustments will be automatically created when the value of the asset is updated [1]
    • {not selected} the updated asset value will only affect depreciation calculations going forward [1]
    • {not enabled} for an asset, the depreciation proposal skips the asset [1]
  • {optional} define special depreciation allowances, or bonus depreciation, for a specific combination of a fixed asset group and a book [1]
    • assign a priority to the special depreciation allowance to specify the order in which allowances are calculated when multiple allowances are assigned to a book [1]
  • Journal names 
    • create the journal names that should be used with the FA journal.
      • set the Journal type field to 'Post fixed assets'
      • set the Voucher series field so that the journal names are used for the FA journal [1]
      • FA journals should not use the One voucher number only setting
        • ⇐ because a unique voucher number is required for several automated processes, such as transfers and splits [1]
    • each journal used for depreciation is defined by its journal name for only one posting layer [1]
      • the posting layer in the journal can't be changed [1]
        • the restriction helps guarantee that transactions for each posting layer are kept separate [1]
      • at least one journal name must be created for each posting layer [1]
        • for books that don't post to GL create a journal where the posting layer is set to None [1]
  • {parameter} Capitalization threshold
    • determines the assets that are depreciated
    • if a purchase line is selected as a fixed asset, but it doesn't meet the specified capitalization threshold, a fixed asset is still created or updated, but the Calculate prorated depreciation option is set to No. Therefore, the asset won't be automatically depreciated as part of the depreciation proposals.
  • {option} Automatically create depreciation adjustment amounts with disposal
    • {Yes}, the asset depreciation is automatically adjusted, based on the depreciation settings at the time of asset disposal [1]
  • [Purchase orders] 
    • {option} Allow asset acquisition from Purchasing
      • {Yes} asset acquisition occurs when the invoice is posted [1]
      • {No} the FA can be put on a PO and invoice, but the acquisition won't be posted [1]
        • the posting must be done as a separate step, from the FA journal
    • {option} Create asset during product receipt or invoice posting 
      • {enabled} a new asset is created while posting
        • ⇒ the asset doesn't have to be set up as a fixed asset before the transaction
  • [Purchase requisitions] {option} Check for fixed assets creation during line entry
  • Fixed asset number sequence 
    • can be overridden by the FA group number sequence if it has been specified [1]
  • depreciation conventions for groups of assets that are part of the same book
    • {action} updating the depreciation convention
      • delete all depreciation transactions that exist for those assets [1]
      • delete all transactions for depreciation adjustments, transactions for bonus depreciation, and transactions for extraordinary depreciation for those assets [1]
    • {action} update the depreciation convention for assets that have already been disposed
      • delete first the existing disposal transactions [1]
      • delete all transactions that were generated because of the disposal process [1]
    • {action} after updating the depreciation convention for assets
      • process depreciation and extraordinary depreciation for each asset [1]
      • make manual depreciation adjustments, if any adjustments are required [1]
  • [General Ledger] main accounts 
    • {default} the ledger accounts that are defined are used for your FA transactions [1]
    • {recommendation} define the ledger accounts that are used during the disposal processes, both disposal sales and disposal scraps [1]
    • {recommendation} to help guarantee that transactions are correctly reversed, set up accounts for each type of transaction that you use in your business [1]
      • the main account should be the original account set on the posting profile for the transaction type, and the offset account should be the gain and loss for disposal account [1]
      • {exception} for the net book value, the main account and the offset account should be set to the gain and loss for disposal account [1]
      • various combinations of main accounts can be created depending on the level of detail needed in GL [1]
    • can be based on transaction types, books, and other main accounts [1]
    • the value of all FAs is typically summarized in multiple main accounts that are required for financial reporting [1]
    • each time a transaction is posted for a FA, the appropriate main accounts are updated
      • ⇐ the main accounts always show the updated value [1]
    • the main accounts that are used for posting are determined by the options for ledger integration that are specified for the item model group [1]
      • the main accounts that are used vary, depending on whether an asset is assigned to the purchase order line [1]
      • the accounts are derived from the posting profile for each item group [1]
  • [Inventory management]
    • inventory items can be transferred to FAs either as an acquisition or as part of an acquisition [1]
      • an inventory item becomes a FA acquisition through 
        • inventory journal
          • one can enter the acquisition of FA that the LE has produced or constructed for itself [1]
        • PO
          • one PO line creates one fixed asset, regardless of the quantity [1]
          • items can be set up for both resale and internal use [1]
            • using specific receipt and issue accounts on item groups for FAs, one can use the same inventory item both for internal purchases and as stock for resale [1]
        • acquisition proposal
        • ⇐ with it a FA book acquisition transaction is created [1]
          • if a book acquisition includes a derived book, the derived book acquisition transaction is also created [1]
    • the effect the acquisition of fixed assets has on inventory depends on LE's setup [1]
      • posting rules control the decrease in inventory when an acquisition is posted [1]
        •  the inventory isn't always decreased when posting invoices related to FA [1]
    • FAs for internal use must have an account type of 
      • 'Fixed asset receipt'
        • used to track FA's receipt
        • when posting a Vendor invoice, use the FA receipt account if any of the following conditions hold:
          • the invoice line contains an existing FA for internal purposes [1]
          • the New fixed asset? check box is selected for the product receipt line that is posted [1]
          • the 'Create a new fixed asset' check box is selected for the vendor invoice line [1]
        • typically, this account is an expense account
      • 'Fixed asset issue' 
        • used to track the issuing of the FA to the recipient [1]
        • when an asset is acquired by using a PO, the FA issue account offsets the FA debit account [1]
        • the asset acquisition can be posted either when posting the Vendor invoice or when posting the asset acquisition in the FA journal, possibly by using an acquisition proposal [1]
      • ⇐ setup for either an item group or an individual item by using the Purchase order tab on the Item group or Posting page [1]
    • if an inventory reservation exists when product receipts are posted, one can’t assign or create a FA from the line [1]
    • the accounts that FA transactions are posted to depend on two factors:
      • whether the assets are purchased or constructed by the LE [1]
      • the transaction type of the asset
        • connects the inventory transaction to the posting profile in FAs [1]
        • because the posting profile in FAs defines which accounts are updated, the selection of a transaction type for a FA is also, indirectly, the selection of the main accounts that the transaction is posted to [1]
        • for both constructed and purchased fixed assets, the transaction type is typically Acquisition or Acquisition adjustment [1]
  • [Accounts receivable]
    • the integration uses posting profiles that are set up in FAs
      • activated when a FA, book, and FA transaction type is selected for a Customer invoice before the customer invoice is posted [1]
      • selling a FA must be done via Free text invoices
        • ⇐ because FAs aren’t part of Inventory management [1]
  • [Accounts payable]
    • typically, FAs are acquired from external vendors
  • asset acquisitions are posted either
    • when posting the Vendor invoices
      • FA accounts are updated whenever a Vendor invoice for a FA acquisition is posted [1]
      • the transaction is posted according to the posting profiles that are set up in FAs for the various FA transaction types [1]
    • from FAs
  • [Project management and accounting]
    • a project can be associated with a FA that is affected by the project
      • each phase, task, or subproject can be associated to a different asset [1]
      • one FA can be associated with each project record [1]
      • the association is created via the Fixed asset number field on the Projects page [1]
        • typically, applies to projects related to work, maintenance, or improvements for the FA [1]
      • the project type must be either Internal or Cost project
      • clear the Fixed asset number field on the Projects page to delete the association [1]
    • when the project is completed, a write-up adjustment for the asset isn’t created automatically [1]
      • ⇒ must be created manually [1] 
    • one can designate a FA created or manufactured as part of an estimate project [1] 
      • at the end of an estimate project, one can automatically post a FA acquisition transaction [1]

Acronyms:
FA - Fixed Asset
GL - General Ledger
LE - Legal Entity
PO - Purchase Order

Resources:
[1] Microsoft Learn (2023) Configure fixed assets management in Dynamics 365 Finance [link]

02 January 2024

🕸Systems Engineering: Never-Ending Stories in Praxis (Quote of the Day)

Systems Engineering
Systems Engineering Cycle

"[…] the longer one works on […] a project without actually concluding it, the more remote the expected completion date becomes. Is this really such a perplexing paradox? No, on the contrary: human experience, all-too-familiar human experience, suggests that in fact many tasks suffer from similar runaway completion times. In short, such jobs either get done soon or they never get done. It is surprising, though, that this common conundrum can be modeled so simply by a self-similar power law." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

I found the above quote while browsing through Manfred Schroeder's book on fractals, chaos and power laws, book that also explores similar topics like percolation, recursion, randomness, self-similarity, determinism, etc. Unfortunately, when one goes beyond the introductory notes of each chapter, the subjects require more advanced knowledge of Mathematics, respectively further analysis and exploration of the models behind. Despite this, the book is still an interesting read with ideas to ponder upon.

I found myself a few times in the situation described above - working on a task that didn't seem to end, despite investing more effort, respectively approaching the solution from different angles. The reasons residing behind such situations were multiple, found typically beyond my direct area of influence and/or decision. In a systemic setup, there are parts of a system that find themselves in opposition, different forces pulling in distinct directions. It can be the case of interests, goals, expectations or solutions which compete or make subject to politics. 

For example, in Data Analytics or Data Science there are high chances that no progress can be made beyond a certain point without addressing first the quality of data or design/architectural issues. The integrations between applications, data migrations and other solutions which heavily rely on data are sensitive to data quality and architecture's reliability. As long as the source of variability (data, data generators) is not stabilized, providing a stable solution has low chances of success, no matter how much effort is invested, respectively how performant the tools are.

Some of the issues can be solved by allocating resources to handle their implications. Unfortunately, some organizations attempt to solve such issues by allocating the resources in the wrong areas or by addressing the symptoms instead of taking a step back and looking systemically at the problem, analyzing and modeling it accordingly. Moreover, there are organizations which refuse to recognize they have a problem at all! In the blame game, it's much easier to shift the responsibility on somebody else's shoulders. 

Defining the right problem to solve might prove more challenging than expected and usually this requires several iterations in which the knowledge obtained in the process is incorporated gradually. Other times, one attempts to solve the correct problem by using the wrong methodology, architecture and/or skillset. The difference between right and wrong depends on the context, and even between similar problems and factors the context can make a considerable difference.

The above quote can be corroborated with situations in which perfection is demanded. In IT and management setups, excellence is often confounded with perfection, the latter being impossible to achieve, though many managers take it as the norm. There's a critical point above which the effort invested outweighs solution's plausibility by an exponential factor.  

Another source for unending effort is when requirements change frequently in a swift manner - e.g. the rate with which changes occur outweighs the progress made for finding a solution. Unless the requirements are stabilized, the effort spirals towards the outside (in an exponential manner). 

Finally, there are cases with extreme character, in which for example the complexity of the task outweighs the skillset and/or the number of resources available. Moreover, there are problems which accept plausible solutions, though there are also problems (especially systemic ones) which don't have stable or plausible solutions. 

Behind most of such cases lie factors that tend to have chaotic behavior that occurs especially when the environments are far from favorable. The models used to depict such relations are nonlinear, sometimes expressed as power laws - one quantity varying as a power of another, with the variation increasing with each generation. 

Previous Post <<||>> Next Post

Resources:
[1] Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990 (quotes)

31 December 2023

💠🛠️🛠️🗒️SQL Server: Buffer Pool [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources (some unfortunately not available anymore).

Buffer Pool  [aka BP, BPool]

  • main memory component in SQL Server [2]
    • ⇐ preferred memory allocator for the whole server [1]
    • ⇐ the biggest consumer of memory in SQL Server
  • area of memory that SQL Server uses to store
      • cached data (aka data pages)
      • cached execution plans 
      • algebrizer trees for views, constraints and defaults
      • lock memory
      • ⇐ anything with a non-zero value in sys.dm_os_memory_clerks.single_pages_kb
    • ⇐ caching data in the buffer pool reduces the load on the I/O subsystem and improves performance [6]
    • ⇐ all pages must be copied into the buffer pool before they can be used in a query
      •  ⇒ it is needed to scan the buffer pool for a number of operations [3]
    • single-page allocator
      • it commits and decommits memory blocks of 8KB granularity only
    • manages disk I/O functions for bringing data and index pages into the data cache so data can be shared among users [2]
    • all memory not used by another memory component remains in the buffer pool to be used as a data cache for pages read in from the database files on disk [2]
      • when other components require memory, they can request a buffer from the buffer pool [2]
        • most of the buffers taken from the buffer pool for other memory components go to other kinds of memory caches [2]
      • ⇐ they can only get blocks of 8KB in size [1]
        • these blocks are not contiguous in memory [1]
        • allocation for large buffers will be satisfied by memory node's multi-page allocator or by virtual allocator [1]
          • ⇒ memory will be allocated outside of Buffer Pool [1]
        • BP can be used as underneath memory manager for SQL Server components as long as they allocate buffers of 8KB [1]
          • pages allocated from BP are referred to as stolen [1]
    • first decides how much of VAS it needs to reserve for its usage (aka target memory)
      • decision based on internal memory requirements and external memory state
      • it calculates its target amount of memory it thinks it should commit before it can get into memory pressure [1]
        • ⇐ to keep the system out of paging the target memory is constantly recalculated [1]
        • {restriction} the target memory can't exceed max memory that represents max server memory settings [1]
        • even if the min server memory equals the max server memory BP commits its memory on demand [1]
          • ⇐ monitor the corresponding profiler event to observe this behavior [1]
    • reserves all of it right away
        • ⇐ monitor SQL Server's virtual bytes in perfmon or use the vasummary view to observe this behavior
        • ⇐ normally can't get all the needed memory in one region
          • ⇒ several large regions reserved ← by design behavior 
    • commits pages on demand
  • during startup the memory manager configures BP to be SQLOS's single page allocator
    • from that point on all dynamic single page allocations are provided by BP[1]
  • has its own memory clerk
    • is leveraged to respond to external and VAS memory pressure only [47]
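    • {example} a hedged check of the buffer pool's memory clerk (the pages_kb column applies to SQL Server 2012 and later; older versions expose single_pages_kb/multi_pages_kb instead):
        SELECT type, name, memory_node_id, pages_kb
        FROM sys.dm_os_memory_clerks
        WHERE type = 'MEMORYCLERK_SQLBUFFERPOOL';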
  • leverages only Virtual and AWE SQLOS's interfaces
    • never uses any type of page allocator from SQLOS [1]
  • {operation} dropping buffers 
    • likely results in some degree of performance degradation
      • ⇐ any subsequent query executions will have to reread the data from the database files increasing I/O
    • via: DBCC DROPCLEANBUFFERS
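    • {example} a minimal sketch (test environments only, as subsequent queries will have to re-read pages from disk):
        -- write dirty pages to disk first so that more buffers become clean and can be dropped
        CHECKPOINT;
        -- remove the clean buffers from the buffer pool
        DBCC DROPCLEANBUFFERS;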
  • {concept} page
    • the fundamental unit of data storage in SQL Server
  • {concept} buffer
    • page in memory that's the same size as a data or index page [2]
      • ⇐ page frame that can hold one page from a database [2]
  • {concept} buffer pool scan
    • common internal operation that iterates through the entire buffer descriptor array to find any buffers that belong to a specific database [3] (cf. the per-database buffer count sketched after this list)
    • [<SQL Server 2022] serial operation
      • on large memory machines operations that require scanning the buffer pool can be slow [3]
        • ⇐ the larger the machine, the greater the impact 
        • ⇐ it doesn’t necessarily matter about the size of the operation [3]
    • [SQL Server 2022] {feature}Buffer Pool Parallel Scan
      • parallelized by utilizing multiple cores
        • adds processing power to scan the buffer pool more efficiently [3]
          • ⇐ benefits both small and large database operations on larger memory machines
            • 10-30x improvement in executions
            • customers running mission-critical OLTP, hosted service providers, and data warehouse environments will witness the most improvements in overall processing speed [3]
          • uses one task per 8 million buffers (64 GB) 
            • ⇐ a serial scan will still be used if there are less than 8 million buffers [3]
        • uses buffer pool scan diagnostics to improve supportability and insights with new buffer pool scan events [3]
      • can significantly improve the performance of database workloads
        • operations benefiting from it
          • database startup/shutdown
          • creating a new database
          • file drop operations
          • backup/restore operations
          • Always On failover events
          • DBCC CHECKDB and DBCC Check Table
          • log restore operations
          • other internal operations (e.g., checkpoint) 
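    • {example} an illustrative per-database count of the buffers a scan has to walk through (8 KB per buffer; the resource database shows up with a NULL name):
        SELECT d.name AS database_name,
               COUNT(*) AS buffer_count,
               COUNT(*) * 8 / 1024 AS cached_mb
        FROM sys.dm_os_buffer_descriptors AS bd
        LEFT JOIN sys.databases AS d ON d.database_id = bd.database_id
        GROUP BY d.name
        ORDER BY buffer_count DESC;
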
References:
[1] Slava Oks (2005) SQLOS's memory manager and SQL Server's Buffer Pool (link) [something similar seems to be available here]
[2] Kalen Delaney (2006) Inside Microsoft® SQL Server™ 2005: The Storage Engine
[3] Microsoft SQL Server Blog (2022) Improve scalability with Buffer Pool Parallel Scan in SQL Server 2022, by David Pless (link)
[4] Microsoft Learn (2022) Operations that trigger a buffer pool scan may run slowly on large-memory computers (link)
[5] Microsoft Docs (2023) DBCC TRACEON - Trace Flags (link)
[6] Dmitri Korotkevitch (2015) Expert SQL Server: In-Memory OLTP
[7] Slava Oks (2005) SQLOS's memory manager: responding to memory pressure (link)

30 December 2023

💫ERP Systems: Microsoft Dynamics 365's Invoice Capture (Features)

Disclaimer: This is work in progress intended to consolidate information from the various sources (what I considered as important during my exploration of the documentation and further communication) and not to provide an overview of all the features. Please refer to the documentation for a complete overview!

Regarding the releases, see [2].
Last updated: 23-May-2024

Invoice Capture - Main Components [3]

AI Model

  • {feature} prebuilt model (aka Invoice processing model) [by design]
    • can handle the most common invoices in various languages
    • owned by Microsoft and cannot be trained by the customers
  • {feature} custom prebuilt models [planned]
    • built on top of the prebuilt model to handle more complex invoice layouts
    • only requires to train the exceptional invoices
    • after a model is published, additional mapping is required to map the model fields to the invoice files
  • {feature} custom model [by design]
    • requires training the model from scratch 
      • costs time and effort
    • supports the Charges and Sales Tax Amount fields [1.5.0.2]
    • supports lookup lists on custom fields [1.6.0.x]

Channels

  • flows that collect the invoices into one location 
    • there's always a 1:1 relationship between flows and channels
  • multiple channels can be defined using different triggers based on Power Automate connectors
  • {feature}default channel for Upload files [1.0.1.0]
  • {feature} supports multiple sources via flow templates [by design]
    • Outlook.com
    • Microsoft Outlook 365
    • Microsoft Outlook 365 shared mailbox [1.1.0.10]
      • to achieve similar behavior one had to modify the first steps of the generated flow with the "When a new email arrives in a shared mailbox (V2)" trigger
    • SharePoint
    • OneDrive
    • OneDrive for business [1.0.1.0]
  • {feature} assign legal entity on the channel
    • the LE value is automatically assigned without applying additional derivation logic [1]
  • invoices sent via predefined channels are captured and appear on the 'Received files' page

Configuration groups 

  • allow managing the list of invoice fields and the manual review settings 
  • can be assigned for each LE or Vendor
    • all the legal entities in the same configuration group use the same invoice fields and manual review setting
  • default configuration group 
    • created after deployment, can't be changed or deleted

Fields

  • {feature} standard fields 
    • Legal entity 
      • organizations registered with legal authorities in D365 F&O and selected for Invoice capture
      • {feature} allows enforcing a role-based security model
      • {feature} synchronized from D365 F&O to CRM 
      • {missing feature} split the invoices between multiple LEs
    • Vendor master 
      • individuals or organizations that supply goods or services
      • used to automatically derive the Vendor account
      • {feature} synchronized from D365 F&O to CRM
        • synchronization issues solved [1.6.0.x]
      • {feature} Vendor account can be derived from tax number [1.0.1.0]
    • Invoice header
      • {feature} Currency code can be derived from the currency symbol [1.0.1.0]
    • Invoice lines
    • Item master
      • {feature} Item number can be derived from External item number [1.0.1.0]
      • {feature} default the item description for procurement category items from the value in the original document [1.1.0.32/10.0.39]
    • Expense types (aka Procurement categories) 
    • Charges
      • amounts added to the lines or header
      • {feature} header-level charges [1.1.0.10]
      • {missing feature} line-level charges
    • {feature} Financial dimensions
      • header level [1.1.0.32/10.0.39]
      • line level [1.3.0.x]
    • Purchase orders
      • {feature} PO formatting based on the number sequence settings
      • {missing feature} PO details
      • {missing feature} multiple POs, multiple receipts 
    • {missing feature} Project information integration
      • {workaround} the Project Id can be added as a custom field on the Invoice line for Cost invoices; the field should then be mapped to the corresponding Data entity to transfer the value to D365 F&O
  • {feature} custom fields [1.1.0.32/10.0.39]

File filter

  • applies additional filtering to incoming files at the application level
  • a default, updatable global file filter is provided with the installation
  • can be applied at different channel levels
  • {event} an invoice document is received
    • the channel is checked first for a file filter
    • if no file filter is assigned to the channel level, the file filter at the system level is used [1]
  • {configuration} maximum file size: 20 MB
  • {configuration} supported file types: PDF, PNG, JPG, JPEG, TIF, TIFF
  • {configuration} supported file names 
    • filter out files that aren't relevant to invoices [1]
    • rules can be applied to accept/exclude files whose name contains predefined strings [1]

Actions

  • Import Invoice
    • {feature} Channel for file upload
      • a default channel is provided for directly uploading the invoice files
      • maximum 20 files can be uploaded simultaneously
  • Capture Invoice
    • {feature} Invoice capture processing
      • different derivation rules are applied to ensure that the invoices are complete and correct [1]
    • {missing feature} differentiate between relevant and non-relevant content
    • {missing feature} merging/splitting files
      • {workaround} export the file(s), merge/split them, and import the result
  • Void files [1.0.1.0]
    • once the files are voided, they can be deleted from Dataverse
      • saves storage costs
  • Classify Invoice
    • {feature} search for LEs
    • {feature} assign LE to Channel
      • AP clerks see only the invoices under the LEs assigned to them
    • {feature} search for Vendor accounts by name or address 
  • Maintaining Headers
    • {feature} multiple sales taxes [1.1.0.26]
    • {missing feature} rounding-off [only in D365 F&O]
    • {feature} support Credit Notes [1.5.0.2]
  • Maintaining Lines
    • {feature} Add/remove lines
    • {feature} "Remove all" option for deleting all the invoice lines in Side-by-Side Viewer [1.1.0.26]
      • Previously, lines could only be deleted one by one, which for incorrectly formatted big invoices leads to considerable effort. Imagine invoices from Microsoft or other cloud providers that contain 5-10 pages of lines.
    • {feature} Aggregate multiple lines [requested]
      • Currently all lines of an invoice have the same importance. Especially for big invoices, it would be useful to aggregate the amounts from multiple lines under one.
    • {feature} Show the total amount across the lines [requested]
      • When removing/adding lines, it would be useful to compare the total amount across the lines with the one from the header. 
    • Check the UoM consistency between invoice line and linked purchase order line
      • For the Invoice to be correctly processed, the two values must match. 
    • Support for discounts [requested]
      • as a workaround, discounts can be entered as separate lines
  • Transfer invoice

  • Automation
    • {parameter} Auto invoice cleanup
      • automatically cleans up transferred and voided invoices older than 180 days, every day [1]
    • Use continuous learning 
      • select this option to turn on the continuous learning feature
      • learns from the corrections made by the AP clerk on a previous instance of the same invoice [1]
        • records the mapping relationship between the invoice context and the derived entities [1]
        • the entities are automatically derived for the next time a similar invoice is captured [1]
      • {missing feature} standard way to copy the continuous learning from UAT to PROD
      • {feature} migration tool for continuous learning knowledge
        • allows transferring the learning knowledge from one environment to another [1.6.0.x]
    • Confidence score check
      • for the prebuilt model the confidence score is always the same 
        • its value is returned by the AI Builder service
        • the confidence score can be improved only when the custom prebuilt model is used
        • it can be increased by uploading more samples and tagging them accordingly
        • a low confidence score is caused by the fact that not enough samples with the same pattern have been trained
      • {parameter} control the confidence score check [1.1.0.32/10.0.39]
  • Manage file filters

User Interface

  • Side-by-side view [by design]
  • History logs [1.0.1.0]
    • supported in Received files and Captured invoices
    • help AP clerks know the actions and results in each step during invoice processing 
  • Navigation to D365 F&O [1.0.1.0]
    • once the invoice is successfully transferred to D365 F&O, a quick link is provided for the AP clerk to open the Pending vendor invoice list in F&O
  • Reporting
    • {missing feature} consolidated overview across multiple environments (e.g. for licensing needs evaluation)
    • {missing feature} metrics by Legal entity, Processing status, Vendor and/or Invoice type
      • {workaround} a paginated report can be built based on Dataverse (see the sketch below)
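
As a rough illustration of such a workaround, a paginated report could aggregate the captured invoices directly from Dataverse (e.g. via the TDS/SQL endpoint). The table and column names below are placeholders, not the actual Invoice Capture schema:

  -- hypothetical sketch: invoice counts by legal entity, status and vendor;
  -- replace the placeholder names with the actual Dataverse tables/columns
  SELECT LegalEntity,
         ProcessingStatus,
         VendorName,
         COUNT(*) AS InvoiceCount
  FROM CapturedInvoice
  GROUP BY LegalEntity, ProcessingStatus, VendorName
  ORDER BY LegalEntity, ProcessingStatus;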

Data Validation

  • [Invoice Capture] derivation rules 
    • applied to ensure that the invoices are complete and correct [1]
    • {missing feature} derive vendor using the associated email address
    • {parameter} Format purchase order 
      • used to check the number sequence settings in D365 F&O to format the PO number [1]
    • {parameter} Derive currency code for cost invoice
      • used to derive the value from invoice master data in D365 F&O [1]
      • ⇐ the currency code on PO Invoices must be identical to the one on the PO
    • {parameter} Validate total sales tax amount 
      • validates the consistency between the sales tax amount on the Sales tax card and the total sales tax amount, when there's a sales tax line [1]
    • {parameter} Validate total amount 
      • confirms alignment between the calculated total invoice amount and the captured total amount [1] (see the sketch after this list)
  • [D365 F&O] before workflow submissions [requested]
    • it makes sense to have out-of-the-box rules
    • currently this can be done by implementing extensions
  • [D365 F&O] during workflow execution [by design]
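
To make the intent of the total-amount check concrete, here is a minimal sketch of the rule expressed as a query; invoice_header, invoice_line and the total_* columns are placeholder names, not the actual Invoice Capture or F&O schema:

  -- hypothetical sketch: flag invoices where the captured header total differs
  -- from the sum of line amounts, charges and sales tax (placeholder schema)
  SELECT h.invoice_id,
         h.total_amount AS captured_total,
         l.lines_total + h.total_charges + h.total_sales_tax AS calculated_total
  FROM invoice_header AS h
  JOIN (SELECT invoice_id, SUM(line_amount) AS lines_total
        FROM invoice_line
        GROUP BY invoice_id) AS l
    ON l.invoice_id = h.invoice_id
  WHERE h.total_amount <> l.lines_total + h.total_charges + h.total_sales_tax;
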
Dynamics 365 for Finance [aka D365 F&O]
  • Attachments 
    • {parameter} control document type for persisting the invoice attachment in D365 F&O [1.1.0.32/10.0.39]
  • Invoice Capture
    • {parameter} select the entities in scope
    • {parameter} differentiate by Invoice type whether Vendor invoice or Invoice journal is used to book the invoices
    • {parameter} Transfer attachment
  • Fixed Assets
    • {missing feature} create the Fixed asset automatically at the time the invoice is imported
  • Vendor Invoice Journal 
    • {missing feature} configure what journal the invoice is sent to
      • the system seems to pick the first journal available
      • {parameter} control journal name for creating 'Invoice journal'  [1.1.0.32/10.0.39]
    • {missing feature} grouping multiple invoices together in a journal (e.g., vendor group, payment terms, method of payment)
  • Approval Workflow
    • {missing feature} involve Responsible person for the Vendor [requested]
    • {missing feature} involve Buyer for Cost invoices [requested]
    • {missing feature} differentiation between the various types of invoices [requested]
    • {missing feature} amount-based approval [requested]
  • Billing schedule
    • {feature} integration with Billing schedules
    • {feature} modify or cancel Billing schedules
  • Reporting
    • {missing feature} Vendor invoices hanging in the approval workflow (incl. responsible person for current action, respectively error message)
    • {missing feature} Report for GL reconciliation between Vendor invoice and GL via Billing schedules
    • {missing feature} Overview of the financial dimensions used (to identify whether further setup is needed)


Resources:
[1] Microsoft Learn (2023) Invoice capture overview (link)
[2] Yammer (2023) Invoice Capture for Dynamics 365 Finance (link)
[3] Microsoft (2023) Invoice Capture for Dynamics 365 Finance - Implementation Guide

Acronyms:
AI - Artificial Intelligence 
AP - Accounts Payable
F&O - Finance & Operations
LE - Legal Entity
PO - Purchase Order
UoM - Unit of Measure

25 December 2023

💫ERP Systems: Microsoft Dynamics 365's Vendor Invoices Processing (Setup Areas)

ERP Systems

Invoices can be set up to be imported automatically into Dynamics 365 for Finance and Operations (D365 F&O) via Invoice Capture (see process). Besides the Invoice Capture-related setup, there are several areas related to Vendor invoice processing:

General Ledger

Accounts for automatic transactions need to be defined at least for 'Vendor invoice rounding-off' and 'Rounding variance', as well as for 'Exchange rate gain' and 'Exchange rate loss', if this wasn't done already.

To facilitate accounts' reconciliation and reporting, one can enable a Batch transfer rule for the Source document type 'Vendor invoice', so that one journal is used for each Vendor invoice. This also makes sure that the Description from the Invoice is carried over to the Journal.

Subscription Billing

In case one needs to defer the amounts over a time interval (e.g. x months), as is the case for prepayments, the integration with Subscription Billing on the Vendor side in F&O seems to work with minimal configuration. A Billing schedule is created for each Invoice distribution and can be modified or cancelled after the Invoice is posted, if needed.

If your organization uses Subscription Billing also for Accounts Receivables (AR), one might need to compromise on the setup because the same parameters are used for both modules. 

Concerning accounts' reconciliation, there seems to be a 1:1 mapping between the Schedule line and the General Ledger (GL) posting, with a table holding the mapping between the records. One can thus build a paginated report to display the mapping between AP and GL, if the infrastructure is in place (e.g. a data lakehouse/warehouse based on F&O, see post); see the sketch below.
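
A minimal sketch of such a reconciliation query, assuming placeholder table names (billing_schedule_line, schedule_gl_mapping, gl_journal_line) rather than the actual F&O schema:

  -- hypothetical sketch: map each billing schedule line to its GL posting
  -- via the mapping table mentioned above (all names are placeholders)
  SELECT s.schedule_number,
         s.line_amount,
         g.journal_number,
         g.ledger_account,
         g.amount
  FROM billing_schedule_line AS s
  JOIN schedule_gl_mapping AS m
    ON m.schedule_line_id = s.schedule_line_id
  JOIN gl_journal_line AS g
    ON g.journal_line_id = m.journal_line_id;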

Fixed Assets

One can enable the creation of Fixed assets by checking the "Create asset during product receipt or invoice posting" radio button in Fixed assets parameters.

The feature "create the fixed asset automatically at the time of invoice import" for PO-based Invoices (with the "Create Fixed Asset" flag set at line level) doesn't seem to be supported yet.

Vendor Invoice Journals

Before Invoice Capture, Vendor invoice journals were helpful for posting summarized cost invoices that are not associated with POs (e.g. expenses for supplies or services). One can still use this approach; however, if the line-based details are important, it makes sense to use Pending vendor invoices with Service items or Procurement categories.

A single Vendor invoice is created as one Vendor invoice journal. 

Organization Administration

When using workflows, setting the same language for all LEs can reduce the amount of redundant information maintained in the workflow(s), otherwise the texts need to be provided for each language. 

Default descriptions can be enabled for the Purchase orders' invoice ledger and vendor, so that the description entered on Pending vendor invoices is carried over to the Invoices and Journals. The functionality works also for Cost invoices.

Ideally, the Invoice's description should be maintainable in Invoice Capture, and/or Microsoft should provide a default description for it as well.

System

Unless there's a requirement to post the journals manually from the subledger to the GL, a batch transfer for subledger journals must be created for each LE.

One can enable email notifications for the users participating in workflows and use workflow delegations for the intervals the respective users are on leave. 

All users participating in the workflow must also be available as active employees. It makes sense to do this at the latest when the integration goes live. Moreover, the users need the appropriate permissions for the roles they have in the process.

Vendor Invoice Automation

There's further functionality available under the 'Vendor Invoice Automation' label (see [1], [2]), though the following are the most important:

  • Automatically apply prepayments to vendor invoices
  • Automatically submit imported invoices to the workflow system
  • Match product receipts to pending vendor invoice lines

Recurring Vendor Invoice Templates

With the 10.0.38 PU, Microsoft introduced a new feature called 'Vendor invoice templates', which allows creating recurring vendor invoices without the need to enter all the vendor invoice information for each separate invoice (see [3]). There seems to be no information available on whether any integration between this feature and Invoice Capture will be supported.

E-Invoicing

EU countries need to enforce Directive 2014/55/EU for Vendor and Customer invoices. The directive requires the electronic exchange of invoice documents between suppliers and buyers to occur over government-held third-party solutions. Each country has its own system(s) and regulations, with different scope and timelines. Some countries already have requirements for 2024 or 2025, and there are similar projects in the US and other countries.

Even if Microsoft started this year (2023) to provide country-specific integrations for e-Invoicing, for the moment there seems to be no information available on how e-Invoicing will integrate with Invoice Capture.


Resources:
[1] Microsoft Learn (2023) Vendor Invoice Automation (link)
[2] Microsoft Dynamics 365 (2023) The Future of Finance: Unlocking the Benefits of Accounts Payable Automation (link)
[3] Hylke Britstra (2023) Recurring vendor invoice templates (link)
[4] Dan Edwards (2023) AP Automation at DynamicsCon (link)

💫ERP Systems: Microsoft Dynamics 365's Invoice Capture (Setup Areas)

ERP Systems

When implementing the Invoice Capture process, there are several setup areas that need to be considered in the Power App (Channels & Mapping Rules), some that need to be done in Dynamics 365 for Finance and Operations (D365 F&O) and impact the integration with the Power App (Expense types/Procurement categories, Financial dimensions, Approval workflow), and setup that concerns only D365 F&O but is still important. In addition, there are also overlapping features like Recurring Vendor Invoice Templates and E-Invoicing.

This post focuses on the first two types of setup areas, the rest following in a second post.

Channels

In the digitalization era, it's expected that most of the Vendor invoices are received by email or in an integrated way (see e-Invoicing). However, there can also be several exceptions (e.g. small companies operating mostly offline).

In Invoice Capture, one or more channels can be defined for Outlook, SharePoint or OneDrive, through which Invoices can be imported into the App. In a common scenario, depending on the number of mailboxes intended for use, one or more channels can be set up for Outlook. The Power App listens on the respective channels for incoming emails in the Inbox; in a first step it processes the emails by importing all the attachments (including signature images), while in a second step it processes all the documents that look like an Invoice (including Receipts) and extracts the metadata to create the Invoice record. Invoices received on paper can be scanned and sent to one of the internal mailboxes, or imported manually in Invoice Capture.

More likely, there will also be a backlog that can be imported via SharePoint or OneDrive, which is more convenient than resending the backlog by email. In the end it should make no difference for the process which channel is used, as long as the Invoices are processed in a timely manner. It should also make no difference if, for example, other channels were used in UAT than in Production. For testing purposes it might be advantageous to have more control over which Invoices are processed, while UAT could follow the same setup as Production, which is generally recommended.

Mapping Rules

The mapping rules allow, on one side, setting default values based on a matching string checked against several attributes (e.g. Company name, Address or Tax registration number for the Legal entity, Item number for the Item, Item description for the Expense type). On the other side, they allow defining a configuration for a rule, which specifies which fields are mandatory, which Invoice types are supported, etc.

The mapping rules will not cover all scenarios, though it's enough if they cover a good percentage of the most common cases. Therefore, over time they are also good candidates for further optimization. Moreover, because Invoice Capture remembers the values used before for an instance of the same Invoice, the mapping rules will be considered only for the first occurrence of the respective Invoice or for whatever is new in its processing.

If a channel was defined for each Legal entity, this seems to make the definition of mapping rules for it obsolete. Conversely, if the number of manual uploads is not negligible, it still makes sense to define a mapping rule.

Mapping rules for the Expense type seem to work well when Items' descriptions are general enough to include certain words (e.g. licenses, utilities). 

One can define mapping rules also for the Vendor accounts and Items, though it's questionable whether the effort makes sense as long as the internal Vendor names and Product numbers don't deviate from the ones used by the Vendor itself.

Expense Types

Invoice Capture requires that either the Item or the Expense type is provided on the line. For PO-based invoices, an Item should be available. Cost invoices can have Items as well and they can be used on the line, though from the point of view of the setup it might be easier to use Expense types. The question is whether the information loss has any major impact on the business. There are also cases in which the lines don't bring any benefit and can thus be deleted in Invoice Capture.

At least for Cost invoices, the Expense types (aka Procurement categories in Dynamics 365) defined can considerably facilitate the automatic processing. D365 F&O can use the Procurement category to automatically populate the Main Account in the Invoice distributions. The value is used as default and can be overwritten, if needed. 

Having, for example, a 1:1 mapping between Procurement categories and Main accounts, and using the same names, can make the AP Clerks' work easier and facilitate troubleshooting.

Conversely, one can define an additional level of detail (aka an additional segment) for reporting purposes. This implies that multiple categories will point to the same Main account, which can increase the overhead, though the complexity of the structure can be reduced by a good naming convention and a consolidated Excel list with the values. The overhead is mainly limited to dealing with the first instance of a Vendor invoice.

Default Item sales tax groups (a 1:1 mapping) can also be defined on Procurement categories and can be overwritten as well. For categories with multiple Item sales tax groups, one should decide whether the benefit of providing a default value outweighs the effort of adding the value for each Invoice line.

Defining upfront, before the go-live, a good hierarchical structure for the Procurement categories and the mappings to the Main accounts and to the Item sales tax groups can reduce the effort of maintaining the structure later and reduces the reporting overhead.

Financial Dimensions & the Vendor Invoice Approval Workflow

Besides their general use, the Financial dimensions can be used to implement an approval process on the D365 F&O side by configuring an expenditure reviewer (see [2]) and using it in the Vendor Invoice Approval workflow, and by setting up the financial owners for each dimension in scope. Different owners for the Financial dimensions can be defined for each Legal entity via Legal entity overrides. From what it seems, notifications are then sent to the override owner as well as to the default owner.

Starting with the 1.1.0.32 (07-Nov-2023) version of Invoice Capture, respectively the 10.0.39 version of D365 F&O (planned for Apr-2024), 3 financial dimensions (Cost center, Department and Legal entity) are supported directly in the App. This would allow covering the example described in [2] and reduces the need for maintaining the values in D365 F&O.

Unfortunately, if the approval process needs to use further dimensions (e.g. Vendor, Location) or attributes (e.g. Invoice responsible person) in the approval, one needs either to compromise or find workarounds. If there's no Purchase order as in the case of Cost invoices, involving the actual Buyer is almost impossible. For such extreme cases one needs more flexibility in the approval process and hopefully Microsoft will extend the functionality behind it.

The approval processes needed by customers occasionally require a complexity that's not achievable with the functionality available in D365 F&O. The customers are then forced to either compromise or use external tools (e.g. Power Automate) for building the respective functionality.

One should consider defining default Financial dimensions on the Vendor, the respective values being used when generating the Invoice in D365 F&O. Defining Financial dimensions templates can help as well when the costs need to be split across different Financial dimensions based on percentages. 

In case the financial dimensions differ across the same Vendor's invoices, one can request the Vendor to provide the respective information on the Invoice.

Organization Administration

Cost invoices can work without providing a Unit of Measure (UoM) in Invoice Capture; however, one should consider using a UoM at least for aesthetic purposes in reporting. On the other side, this can complicate the setup if the same UoM is used for other purposes.

Vendor Invoice Automation

To be able to process the Vendor invoices automatically once they arrive in D365 F&O, it would be useful to maintain directly in Invoice Capture the attributes which are otherwise modified manually in D365 F&O. The usual fields are the following:

  • Invoice description (header)
  • Financial dimensions 
  • Sales item tax group

Master Data Management

There are master data attributes which, even if not directly involved in the process, could make the process actors' lives a bit easier. This can be the case for the Responsible person and/or the Buyer group, which would allow the AP Clerks to identify the Cost center and Department related to the Invoice. Maintaining the external Item descriptions for Vendors can help as well in certain scenarios.


Resources:
[1] Microsoft Learn (2023) Invoice capture overview (link)
[2] Microsoft Learn (2023) Configure expenditure reviewers (link)
