
28 March 2024

Data Management: Master Data Management [MDM] (Notes)

Disclaimer: This is work in progress intended to consolidate information from various sources. 
Last updated: 28-Mar-2024

Master Data Management (MDM)

  • {definition} the technologies, processes, policies, standards and guiding principles that enable the management of master data values for consistent, shared, contextual use across systems of the most accurate, timely, and relevant version of truth about essential business entities [2],[3]
  • {goal} enable sharing of information assets across business domains and applications within an organization [4]
  • {goal} provide authoritative source of reconciled and quality-assessed master (and reference) data [4]
  • {goal} lower cost and complexity through use of standards, common data models, and integration patterns [4]
  • {driver} meeting organizational data requirements
  • {driver} improving data quality
  • {driver} reducing the costs for data integration
  • {driver} reducing risks 
  • {type} operational MDM 
    • involves solutions for managing the master data used by transactional, operational applications [1]
    • rely heavily on data integration technologies
  • {type} analytical MDM
    • involves solutions for managing analytical master data
    • centered on providing high quality dimensions with multiple hierarchies [1]
    • cannot influence operational systems
      • any data cleansing made within the analytical environment isn’t recognized by the transactional applications [1]
        • ⇒ inconsistencies with the main operational data [1]
      • transactional application knowledge isn’t available to the cleansing process
  • {type} enterprise MDM
    • involves solutions for managing both transactional and analytical master data 
      • manages all master data entities
      • delivers maximum business value
    • operational data cleansing
      • improves the operational efficiencies of the applications and the business processes that use the applications
    • cross-application data need
      • consolidation
      • standardization
      • cleansing
      • distribution
    • needs to support high volume of transactions
      • ⇒ master data must be contained in data models designed for OLTP
        • ⇐ ODS don’t fulfill this requirement 
  • {enabler} high-quality data
  • {enabler} data governance
  • {benefit} single source of truth
    • used to support both operational and analytical applications in a consistent manner [1]
  • {benefit} consistent reporting
    • reduces the inconsistencies experienced previously, often caused by complex transformations
  • {benefit} improved competitiveness
    • MDM reduces the complexity of integrating new data and systems into the organization
      • ⇒ increased flexibility and improves competitiveness
    • ability to react to new business opportunities quickly with limited resources
  • {benefit} improved risk management
    • more reliable and consistent data improves the business’s ability to manage enterprise risk [1]
  • {benefit} improved operational efficiency and reduced costs
    • helps identify the business’s pain points
      • by developing a strategy for managing master data
  • {benefit} improved decision making
    • reducing data inconsistency diminishes organizational data mistrust and facilitates clearer (and faster) business decisions [1]
  • {benefit} more reliable spend analysis and planning
    • better data integration helps planners come up with better decisions
      • improves the ability to 
        • aggregate purchasing activities
        • coordinate competitive sourcing
        • be more predictable about future spending
        • generally improve vendor and supplier management
  • {benefit} regulatory compliance
    • allows reducing compliance risk
      • helps satisfy governance, regulatory and compliance requirements
    • simplifies compliance auditing
      • enables more effective information controls that facilitate compliance with regulations
  • {benefit} increased information quality
    • enables organizations to monitor conformance more effectively
      • via metadata collection
      • it can track whether data meets information quality expectations across vertical applications, which reduces information scrap and rework
  • {benefit} quicker results
    • reduces the delays associated with extraction and transformation of data [1]
      • ⇒ it speeds up the implementation of application migrations, modernization projects, and data warehouse/data mart construction [1]
  • {benefit} improved business productivity
    • gives enterprise architects the chance to explore how effective the organization is in automating its business processes by exploiting the information asset [1]
      • ⇐ master data helps organizations realize how the same data entities are represented, manipulated, or exchanged across applications within the enterprise and how those objects relate to business process workflows [1]
  • {benefit} simplified application development
    • provides the opportunity to consolidate the application functionality associated with the data lifecycle [1]
      • ⇐ consolidation in MDM is not limited to the data
      • ⇒ provides a single functional service to which different applications can subscribe
        • ⇐ introducing a technical service layer for data lifecycle functionality provides the type of abstraction needed for deploying SOA or similar architectures
  • factors to consider for implementing an MDM:
    • effective technical infrastructure for collaboration [1]
    • organizational preparedness
      • for making a quick transition from a loosely combined confederation of vertical silos to a more tightly coupled collaborative framework
      • {recommendation} evaluate the kinds of training sessions and individual incentives required to create a smooth transition [1]
    • metadata management
      • via a metadata registry 
        • {recommendation} sets up a mechanism for unifying a master data view when possible [1]
        • determines when that unification should be carried out [1]
    • technology integration
      • {recommendation} diagnose what technology needs to be integrated to support the process instead of developing the process around the technology [1]
    • anticipating/managing change
      • proper preparation and organization will subtly introduce change into the way people think and act, as with any shift in patterns [1]
      • changes in reporting structures and needs are unavoidable
    • creating a partnership between Business and IT
      • IT roles
        • plays a major role in executing the MDM program[1]
      • business roles
        • identifying and standardizing master data [1]
        • facilitating change management within the MDM program [1]
        • establishing data ownership
    • measurably high data quality
    • overseeing processes via policies and procedures for data governance [1]
  • {challenge} establishing enterprise-wide data governance
    • {recommendation} define and distribute the policies and procedures governing the oversight of master data
      • seeking feedback from across the different application teams provides a chance to develop the stewardship framework agreed upon by the majority while preparing the organization for the transition [1]
  • {challenge} isolated islands of information
    • caused by vertical alignment of IT
      • makes it difficult to reconcile the dissimilarities in roles and responsibilities in relation to the isolated data sets as these are integrated into a master view [1]
    • caused by data ownership
      • the politics of information ownership and management have created artificial exclusive domains supervised by individuals who have no desire to centralize information [1]
  • {challenge} consolidating master data into a centrally managed data asset [1]
    • transfers the responsibility and accountability for information management from the lines of business to the organization [1]
  • {challenge} managing MDM
    • MDM should be considered a program and not a project or an application [1]
  • {challenge} achieving timely and accurate synchronization across disparate systems [1]
  • {challenge} different definitions of master metadata 
    • different coding schemes, data types, collations, and more
      • ⇐ data definitions must be unified
  • {challenge} data conflicts 
    • {recommendation} resolve data conflicts during the project [5]
    • {recommendation} replicate the resolved data issues back to the source systems [5]
  • {challenge} domain knowledge 
    • {recommendation} involve domain experts in an MDM project [5]
  • {challenge} documentation
    • {recommendation} properly document your master data and metadata [5]
  • approaches
    • {architecture} no central MDM 
      • isn’t a real MDM approach
      • whenever cross-system interaction is required, ad-hoc merging and cleansing is performed [5]
        • e.g. when performing analysis on data from multiple systems
      • {drawback} very inexpensive at the beginning; however, it turns out to be the most expensive over time [5]
    • {architecture} central metadata storage 
      • provides unified, centrally maintained definitions for master data [5]
        • followed and implemented by all systems
      • ad-hoc merging and cleansing becomes somewhat simpler [5]
      • does not use a specialized solution for the central metadata storage [5]
        • ⇐ the central storage of metadata is probably in an unstructured form 
          • e.g. documents, worksheets, paper
    • {architecture} central metadata storage with identity mapping 
      • in addition to the central metadata, stores the keys that map the identities of the same entities across systems (see the sketch after this list)
        • the MDM database holds only the keys from the source systems; it does not have any other attributes [5]
      • {benefit} data integration applications can be developed much more quickly and easily [5]
      • {drawback} raises problems in regard to maintaining master data over time [5]
        • there is no versioning or auditing in place to follow the changes [5]
          • ⇒ viable for a limited time only
            • e.g. during upgrading, testing, and the initial usage of a new ERP system to provide mapping back to the old ERP system
    • {architecture} central metadata storage and central data that is continuously merged 
      • stores metadata as well as master data in a dedicated MDM system
      • master data is not inserted or updated directly in the MDM system; it continues to be maintained in the source systems [5]
      • the merging (and cleansing) of master data from source systems occurs continuously, regularly [5]
      • {drawback} continuous merging can become expensive [5]
      • the only viable way of using this approach is to find out what has changed in the source systems since the last merge [5]
        • enables merging only the delta (new and updated data)
      • frequently used for analytical systems
    • {architecture} central MDM, single copy 
      • involves a specialized MDM application
        • master data, together with its metadata, is maintained in a central location [5]
        • ⇒ all existing applications are consumers of the master data
      • {drawback} all existing applications must be upgraded to consume master data from the central storage instead of maintaining their own copies [5]
        • ⇒ can be expensive
        • ⇒ can be impossible (e.g. for older systems)
      • {drawback} needs to consolidate all metadata from all source systems [5]
      • {drawback} the process of creating and updating master data could simply be too slow [5]
        • because of the processes in place
    • {architecture} central MDM, multiple copies 
      • uses central storage of master data and its metadata
        • ⇐ the metadata here includes only an intersection of common metadata from source systems [5]
        • each source system maintains its own copy of master data, with additional attributes that pertain to that system only [5]
      • after master data is inserted into the central MDM system, it is replicated (preferably automatically) to source systems, where the source-specific attributes are updated [5]
      • {benefit} good compromise between cost, data quality, and the effectiveness of the CRUD process [5]
      • {drawback} update conflicts
        • different systems can also update the common data [5]
          • ⇒ involves continuous merges as well [5]
      • {drawback} uses a special MDM application
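
A minimal SQL sketch of the identity mapping used by the 'central metadata storage with identity mapping' architecture; the table and column names are illustrative assumptions, not taken from the cited sources. The MDM database stores only the keys under which the same entity is known in each system, and the integration jobs join the systems via this mapping alone.

-- mapping table: keys only, no descriptive attributes
CREATE TABLE dbo.CustomerIdentityMap (
    MasterCustomerId INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key owned by the MDM database
    ErpCustomerId    NVARCHAR(20) NULL,             -- key as known in the ERP system
    CrmCustomerId    NVARCHAR(20) NULL,             -- key as known in the CRM system
    LegacyCustomerId NVARCHAR(20) NULL              -- key as known in the legacy system
);

-- a data integration job can then combine the systems via the mapping,
-- without the MDM database storing any other master data attributes
SELECT map.MasterCustomerId, erp.CustomerName, crm.SegmentCode
FROM dbo.CustomerIdentityMap map
     JOIN erp.Customers erp ON erp.CustomerId = map.ErpCustomerId
     JOIN crm.Accounts crm ON crm.AccountId = map.CrmCustomerId;
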
Acronyms:
MDM - Master Data Management
ODS - Operational Data Store
OLAP - online analytical processing
OLTP - online transactional processing
SOA - Service Oriented Architecture

References:
[1] The Art of Service (2017) Master Data Management Course 
[2] DAMA International (2009) "The DAMA Guide to the Data Management Body of Knowledge" 1st Ed.
[3] Tony Fisher (2009) "The Data Asset"
[4] DAMA International (2017) "The DAMA Guide to the Data Management Body of Knowledge" 2nd Ed.
[5] Dejan Sarka et al (2012) Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 (Training Kit)

20 March 2024

Data Management: Master Data Management (Part I: Understanding Integration Challenges) [Answer]

Data Management
Data Management Series

This is an answer to a post [1] on Master Data Management (MDM) integration challenges by Piethein Strengholt, the author of "Data Management at Scale".

Master data can be managed within individual domains, though the boundaries must be clearly defined, and some coordination is needed. Attempting to partition the entities based on domains doesn't always work. The partition needs to be performed at attribute level, though even then there may be exceptions involved (e.g. some Products are only for Finance to use). One can then identify attributes inside the system to create the boundaries.

MDM is simple if you have the right systems, processes, procedures, roles, and data culture in place. Unfortunately, people make it too complicated – oh, we need a nice shiny system for managing the data before they are entered in ERP or other systems, we need a system for storing and maintaining the metadata, and another system for managing the policies, and the story goes on. The lack of systems is given as a reason why no progress is made. Moreover, people will want to integrate the systems, increasing the overall complexity of the ecosystem.

The data should be cleaned in the source systems and assessed against the same. If that's not possible, then you have the wrong system! A set of well-built reports can make data assessment possible. 

The metadata and policies can be maintained in Excel (and stored in SharePoint), in SharePoint itself, or in a similar system that supports versioning. Pragmatic solutions can be found for the other topics as well.

ERP systems allow us to define workflows and enable a master data record to be published only when the information is complete, though there will always be exceptions (e.g., a Purchase Order must be sent today). Such exceptions make people circumvent the MDM systems with all the issues deriving from this.

Adding an MDM system within an architecture tends to increase the complexity of the overall infrastructure and create more bottlenecks. Occasionally, it just replicates the structures existing in the target system(s).

Integrations are supposed to reduce the effort, though in the past 20 years I never saw an integration work without issues, even where MDM is concerned. One of the main issues is that the solutions just synchronized the data without considering the processual dependencies, and sometimes also the referential dependencies. The time needed for troubleshooting the integrations can easily exceed the time needed for importing the data manually via an upload mechanism.

To make the integration work, the MDM system ends up duplicating all the validation available in the target system(s). This can make sense when a considerable volume of master data is created daily or weekly. Native connectors simplify the integrations, especially when they can handle errors transparently and allow records to be modified manually, though the issues start as soon as the target system is extended with more attributes or other structures.

If an organization has an MDM system, then all the master data should come from the MDM. As soon as a bidirectional synchronization is used (and other integrations might require this), Pandora’s box is open. One can define hard rules, though again, there are always exceptions in which manual interference is needed.

Attempting an integration of reference data is not recommended. ERP systems can have hundreds of such entities. Some organizations tend to have a golden system (a copy of production) with all the reference data. It works for some time, until people realize that the solution is expensive and time-consuming.

MDM systems do make sense in certain scenarios, though to get the integrations right can involve a considerable effort and certain assumptions and requirements must be met.

Previous Post <<||>> Next Post

References:
[1] Piethein Strengholt (2023) Understanding Master Data Management’s Integration Challenges (link)


12 February 2024

Business Intelligence: A One Man Show I (Some Personal Background and a Big Thanks!)

Business Intelligence Series

Over the past 24 years, I often found myself in the position of a "one man show" doing almost everything in the data space, from requirements gathering to development, testing, deployment, maintenance/support (including troubleshooting and optimization), and Project Management, respectively from operations to strategic management, when it was the case. Of course, different tasks of varying complexity are involved! Developing an SSRS or Power BI report is less complex than also developing, in the process, all or parts of the Data Warehouse (or Lakehouse nowadays), respectively building the whole infrastructure needed for reporting. All I can say is that "I've been there, I've done that!". 

Before SSRS became popular, I even built for a customer a whole reporting solution based on SQL Server, HTML & XML, respectively COM+ objects for database access. The UI's look-and-feel was similar to SSRS's, though there was no wizardry involved besides the creative use of programming and optimization techniques. Once an SQL query was written, the volume of work needed to build a report was comparable to the one in SSRS. It was a great opportunity to use my skillset, having previously worked as a web developer and VB/VBA programmer. I worked for many years as a Software Engineer, applying the knowledge acquired in the field whenever it made sense to do so, working alone or in a team, as the projects required.

During this time, I was involved in other types of projects and activities that had less to do with the building of reports and warehouses. Besides the development of various desktop, web, and data-processing solutions, I was also involved in 6-8 ERP implementations, being responsible for the migration of data, building the architectures needed in the process, and supporting key users in various areas like Data Quality or Data Management. I also did Project Management, Application Management, Release and Change Management, and even IT Management. Thus, there were at times at least two components involved - one component was data-related, while the other had more diversity. It was a good experience, because the second component often needed knowledge of the first, and vice versa. 

For example, coming to understand the data model and business processes behind an ERP system by building ad-hoc and standardized reports allowed me to get a good understanding of what data is needed for a Data Migration, what the dependencies are, or the level of quality needed. Similarly, the knowledge acquired by building ETL-based pipelines and data warehouses allowed me to design and build flexible Data Migration solutions, both architectures being quite similar from many perspectives. Knowledge of the data models and architectures involved can facilitate the overall process and is a premise for building reliable, performant solutions. 

Similar examples can also be given in Data Management, Data Operations, Data Governance, during and post-implementation ERP support, etc. Reports and data are needed also in the Management areas - it starts from knowing what data are needed in the supporting processes for providing transparency, getting insights, and bringing the processes under control, if needed.

Working alone, being able to build a solution from beginning to end was often a job requirement. This doesn't imply that I was a "lone wolf". The nature of a data professional's or software engineer's job requires you to interact with various businesspeople, from report requesters to key users, internal and external consultants, intermediary managers, and even upper management. There was also the knowledge of many data professionals involved indirectly – the resources I used to learn from: books, tutorials, blogs, webcasts, code, and training material. I'm thankful for their help over all these years!

Previous Post <<||>> Next Post

21 August 2022

ERP Implementations: It’s all about Partnership II

When starting an ERP implementation project, an organization needs to fill the existing knowledge gaps with respect to whatever it takes to achieve the goals associated with the respective project. Therefore, it makes sense to work with an implementer that can help cover the gaps directly or indirectly. Moreover, it makes sense to establish a long-term relationship that allows harnessing the ERP system's capabilities after the project's end, increasing the ROI and, why not, finding other areas of cooperation. That's in theory what a partner does and what a strategic technology partnership is about – providing any kind of technological expertise the customer doesn't have in-house. 

Unfortunately, going from being a 'service provider' to becoming a 'partner' is a challenging road for many organizations, especially when this type of relationship is not understood and managed accordingly. Managing the partnership comes down to defining common goals, principles, values and processes, establishing a communication strategy and a common understanding of the challenges and the steps ahead, and providing visibility into the cost estimates, billing, and resources' availability and utilization. Addressing these aspects offers a framework in which the partnership can flourish. Without considering these topics, the implementer remains just a 'service provider', no matter the names used to characterize the relationship. 

Now, the use of the word 'partner' would make one think that only one partner is considered, typically a big to middle-sized organization that has this kind of resources. The main reason behind this is that the number of functional areas and the volume of skills required for fulfilling the requirements of an implementation are high compared with other projects, with the resources needing to be available on demand without affecting the other constraints: costs, quality, time. This can be challenging, therefore scenarios can arise in which two or more external organizations are involved in the partnership, ideally organizations that complement each other. 

It is common in ERP implementations to rely also on individual consultants for specific areas or for the whole project. The principles and values of a partnership, as well as the framework behind it, can be applied to individual consultants as well. Independently of the resources' provenance, more important is the partnership 'mindset' - being together in the same boat, working together on a shared and understood strategy, with clear goals and objectives.

Moreover, the people participating in the project must have a 'partner's mindset' as well. Without this, the project will likely get different impulses in the wrong direction(s), as a group's interests will take priority over those of the organization. Ideally, this mindset should extend to the whole organization, as topics like Data Quality and Process Improvement must be an organization-wide effort, deeply imprinted in the organization's culture.

More than ever, it's important for the business to see and treat the IT department as a 'partner' and not as a 'service provider' - by providing the needed level of transparency in requirements, issues, practices and processes, by treating the IT department as an equal party in decision-making, and by addressing its current and future strategic requirements. Ideally, this partnership should start long before the implementation, given that it takes time for mentalities and practices to change, and for knowledge to be acquired and used appropriately. 

Building a partnership takes time, effort and strategic thinking, on top of the actual implementation, thus increasing the overall complexity, at least at the beginning. Does it pay off? Like in a marriage, it's useful to have somebody you can trust, who knows you, whom you can rely upon, and with whom you can talk to find solutions. However, only time will tell whether such expectations are met and kept till the end. 

Previous <||> Next

20 March 2021

Business Intelligence: New Technologies, Old Challenges II (ETL vs. ELT)

 

Business Intelligence

Data lakes and similar cloud-based repositories drove the requirement of loading the raw data before performing any transformations on it. At least that's the approach the new wave of ELT (Extract, Load, Transform) technologies uses to handle analytical and data integration workloads, which is probably recommendable for the mentioned cloud-based contexts. However, ELT technologies are especially relevant when one needs to handle data with high volume, velocity, variety or veracity (aka big data), because they allow processing the workloads over architectures that can be scaled with the workloads' demands.

This is probably the most important aspect, even if there can be further advantages, like using built-in connectors to a wide range of sources or implementing complex data flow controls. The ETL (Extract, Transform, Load) tools have the same capabilities, maybe reduced to certain data sources, though their newer versions seem to bridge the gap.

One of the most stressed advantages of ELT is the possibility of having all the (business) data in the repository, though this is not a technological advantage. The same can be obtained via ETL tools, even if this might involve, depending on the case, a bigger effort, the effort depending on the functionality existing in each tool. It's true that ETL solutions have a narrower scope by loading a subset of the available data, or that transformations are made before loading the data, though this depends on the scope considered while building the data warehouse or data mart, respectively on the design of the ETL packages, and both are a matter of choice - choices that can be traced back to business requirements or technical best practices.

Some of the advantages seen are context-dependent – dependent on the context in which the technologies are put, respectively the problems being solved. It is often imputed to ETL solutions that the available data are already prepared (aggregated, converted) and that new requirements will drive additional effort. On the other side, in ELT-based solutions all the data are made available and eventually further transformed, but also here the level of transformations made depends on the specific requirements. Independently of the approach used, the data are still available if needed, respectively involve a certain effort for further processing.

Building usable and reliable data models is dependent on good design, and in the design process reside the most important challenges. In theory, some think that in ETL scenarios the design is done beforehand though that’s not necessarily true. One can pull the raw data from the source and build the data models in the target repositories.

Data conversion and cleaning are needed under both approaches. In some scenarios it's ideal to do this upfront, minimizing the effect these processes have on the data's usage, while in other scenarios it's helpful to address them later in the process, with the risk that each project will address them differently. This can become an issue and should ideally be addressed by design (e.g. by building an intermediate layer) or at least organizationally (e.g. by enforcing best practices).

Claiming that ELT is better just because the data are true (being in raw form) can be taken only as a marketing slogan. The degree of truth the data have depends on the way the data reflect the business's processes and the way the data are maintained, while their quality is judged entirely against their intended use. Even if raw data allow more flexibility in handling the various requests, the challenges involved in processing them can be neglected only with the consequences that follow from this.

Looking at the analytics and data integration cloud-based technologies, they seem to allow both approaches; thus, building optimal solutions relies on the professionals' wisdom in making appropriate choices.

Previous Post <<||>> Next Post

04 February 2021

Data Migrations (DM): Conceptualization VII (Data Import Layer)

Data Migration
Data Migrations Series

The data requirements for the Data Migration (DM) and Data Quality (DQ) are driven by the processes implemented in the target system(s). Therefore, a good knowledge of these requirements can decrease the effort needed for these two subprojects considerably. The needed knowledge basis starts with the entities and their attributes, the dependencies existing between them and the various rules that apply, and ends with the parametrization requirements, respectively the architecture(s) that can be used to import the data.

The DM process starts with defining the entities in scope and their attributes, respectively identifying the corresponding entities and attributes from the legacy systems. The attributes not having a correspondent in the legacy systems need to be provided by the business and integrated into the DM logic. In addition, one also needs to consider the attributes needed by the business but not available in the target system, some of them more likely available in the legacy systems. For such attributes one needs either to repurpose (misuse) an attribute from the target or to extend the target system.

For each entity a data mapping is created that basically documents the data transformations needed for migrating the data. In the process one also needs to consider the attributes' data types, the (standard) formatting, their domain of definition, as well as the various rules that apply. Their implementation belongs in the DM layer, from which the data are exported in a standard format as needed by the target system.
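
As a hedged sketch, such a mapping can be implemented in the DM layer as a SQL transformation producing the standard format expected by the target system; all table, column and rule names below are hypothetical:

-- hypothetical transformation from a legacy customer table into the target format
SELECT CAST(src.CUST_NO AS NVARCHAR(20))      AS CustomerAccount -- key mapping and data type conversion
     , UPPER(LTRIM(RTRIM(src.CUST_NAME)))     AS CustomerName    -- standard formatting
     , COALESCE(cc.CountryCodeIso, 'XX')      AS CountryCode     -- domain of definition enforced via a lookup
     , CONVERT(DATE, src.CREATED_ON)          AS CreatedDate     -- data type conversion
FROM stg.LegacyCustomers src
     LEFT JOIN map.CountryCodes cc ON cc.LegacyCode = src.COUNTRY;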

Exporting the data from the DM layer directly into the target system's tables has in theory the lowest overhead, even if the rejected records are difficult to track, the rejections resulting only from the records' validation against the database's schema. For this approach to work, one must have a good knowledge of the database schema and of the business rules implemented in the target system.

To solve the issue of error logging, systems have a further layer on top of the database model, which also allows running data validation against the target system's business rules. Modern import frameworks allow loading the data via a set of standard files with a predefined structure. The data can thus be imported manually or via load jobs into the system, a log with the issues being generated in the process. Some frameworks even allow the manual editing of failed records, respectively reimporting the data. Unfortunately, calling this layer from the DM layer is not possible from a database, though this would seldom bring a benefit. Some third-party tools attempt to improve the import functionality by calling the target system's import layer.

The import files must be generated from the DM layer in the required structure with the appropriate formatting. The challenge however resides in identifying all the attributes that should make up the scope of the load. It's an iterative process which is sometimes backed by trial-and-error heuristics. Unless the target system's validation rules are known beforehand, the rules need to be discovered in this process, which can prove time-consuming. The discoveries need to be integrated also into the DM logic, and from here results the big number of changes that need to be performed.

Given the dependencies existing between entities the files need to be generated and loaded in a predefined order. These dependencies are reflected also in the data processing and the validation rules considered in the DM layer.

A quality checkpoint can be implemented between the export from the DM layer and the import, to enforce the four-eyes principle. It's normally the last opportunity for trapping any remaining issues. A further quality check is performed after the import, by validating whether the data were imported as expected.
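
A simple sketch of such a post-import check, assuming hypothetical export and target tables, is a key and count reconciliation between what left the DM layer and what landed in the target system:

-- records exported from the DM layer that didn't make it into the target system
SELECT exp.CustomerAccount
FROM dm.CustomersExport exp
     LEFT JOIN tgt.Customers tgt ON tgt.CustomerAccount = exp.CustomerAccount
WHERE tgt.CustomerAccount IS NULL;

-- record counts on both sides, to be compared after each import
SELECT (SELECT COUNT(*) FROM dm.CustomersExport) AS ExportedRecords
     , (SELECT COUNT(*) FROM tgt.Customers)      AS ImportedRecords;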

Previous Post <<||>> Next Post

Data Migrations (DM): Conceptualization VI (Data Migration Layer)

Data Migration
Data Migrations Series

Besides migrating the master and transactional data from the legacy systems, there are usually three additional important business requirements for a Data Migration (DM) – migrate the data within the expected timeline, with minimal disruption for the business, respectively within the expected quality levels. Hence, the DM's timeline must match and synchronize with the main project's timeline in terms of main milestones, though the DM typically needs to be executed within a small timeframe of a few days during the Go-Live. In what concerns the third requirement, even if the data have high quality as available in the source systems or as provided by the business, there are aspects like integration and consistency that rely primarily on the DM logic.

To address these requirements the DM logic must reach a certain level of performance and quality that allows importing the data as expected. From the project's beginning until UAT, the DM team will integrate the various pieces of information iteratively, test the changes several times, and troubleshoot the deviations from expectations. The volume of effort required for these activities can be overwhelming. It's not only important for the whole solution to be performant, but each step must be designed so that, besides fast execution, the changes and troubleshooting involve a minimum of overhead.

For a better understanding of the importance, imagine a quest game in which the character has to go through a labyrinth with traps. If the player makes a mistake, he'll need to restart from a certain distant point in time or even from the beginning. Now imagine that for each mistake he has the possibility of going one step back, trying a new option and moving forward. For some it may look like cheating, though in this way one can finish the game relatively quickly. It would be great if executing a DM could allow the same flexibility.

Unfortunately, unless the data are stored between steps or each step is a different package, an ETL solution doesn't provide the flexibility of changing the code, moving one step back, rerunning the step and performing troubleshooting, over and over again like in the quest game. To better illustrate the impact of such an approach, let's consider that the DM has about 40 entities and one needs to perform on average 20 changes per entity. If one is able to move forwards and backwards, each change will probably take a few minutes to execute. Otherwise, rerunning a whole package can take 5-10 times longer or even more, depending on the package's size and the data volume. For 800 changes, just one additional minute per change equates to 800 minutes (about 13 hours).

In exchange, storing the data for an entity in a database at the important points of the processing and implementing the logic as a succession of SQL scripts allows this flexibility. The most important downside is that the steps need to be executed manually, though this is a small price to pay for the flexibility and control gained. Moreover, with a few tricks one can load deltas as in the case of a phased DM.
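
A minimal sketch of the idea, with hypothetical table names: each important step persists its outcome into its own table, so a step can be corrected and rerun without restarting the whole sequence.

-- step 1: raw copy of the legacy data (rerun only when the source extract changes)
SELECT *
INTO dm.Customers_Raw
FROM src.LegacyCustomers;

-- step 2: cleaning/standardization, stored separately so step 1 doesn't need rerunning
SELECT CUST_NO
     , UPPER(LTRIM(RTRIM(CUST_NAME))) AS CustomerName
     , CUST_GROUP
INTO dm.Customers_Cleaned
FROM dm.Customers_Raw;

-- step 3: mapping to the target structure; only this step is rerun when a mapping rule changes
SELECT c.CUST_NO AS CustomerAccount
     , c.CustomerName
     , m.CustomerGroup
INTO dm.Customers_Final
FROM dm.Customers_Cleaned c
     LEFT JOIN map.CustomerGroups m ON m.LegacyGroup = c.CUST_GROUP;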

To assure that the consistency of the data is kept, one needs to build for each entity a set of validation queries that check for duplicates, special cases, data integrity, incorrect formats, etc. The queries can be included in the sequence of logic used for the DM. Thus, one can react promptly to each unexpected value. When required, the validation rules can be built into reports and used in the data cleaning process by the users, or even logged periodically per entity for tracking the progress.
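
A few illustrative validation queries (hypothetical names, continuing the sketch above) that can be included in the DM sequence:

-- duplicates: the same customer account appearing more than once
SELECT CustomerAccount, COUNT(*) AS Records
FROM dm.Customers_Final
GROUP BY CustomerAccount
HAVING COUNT(*) > 1;

-- data integrity: orders pointing to customers missing from the migration
SELECT o.OrderNumber, o.CustomerAccount
FROM dm.Orders_Final o
     LEFT JOIN dm.Customers_Final c ON c.CustomerAccount = o.CustomerAccount
WHERE c.CustomerAccount IS NULL;

-- incorrect format: values longer than the target system allows
SELECT CustomerAccount
FROM dm.Customers_Final
WHERE LEN(CustomerAccount) > 20;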

Previous Post <<||>> Next Post

01 February 2021

Data Migrations (DM): Quality Acceptance Criteria IV

Data Migration
Data Migrations Series

Reliability

Reliability is the degree to which a solution performs its intended functions under stated conditions without failure. In other words, a DM is reliable if it performs what was intended by design. The data should be migrated only after the migration's reliability has been confirmed by the users as part of the sign-off process. The dry-runs as well as the final iteration for the UAT have the objective of confirming the solution's reliability.

Reversibility

Reversibility is the degree to which a solution can return to a previous state without starting the process from the beginning. For example, it should be possible to reverse the changes made to a table by returning it to its previous state. This can involve having a copy of the data stored, respectively deleting and reloading the data when necessary. 

Considering that the sequence of the various activities is fixed, in theory it's possible to address reversibility by design, e.g. by allowing individual steps to be repeated or by creating rollback points. Rollback points are especially important when loading the data into the target system. 

Robustness

Robustness is the degree to which the solution can accommodate invalid input or environmental conditions that might affect the data's processing or other requirements (e.g. performance). Even if the logic can be stabilized over the various iterations, the variance in data quality can have an important impact on the solution's robustness. One can accommodate erroneous input by relaxing the schema's rules and adding further quality checks.

Security 

Security is the degree to which the DM solution protects the data so that only authorized people have access to the respective data, at the defined level of authorization, as the data are moved through the solution. The security provided by a solution needs to be considered against the standards and further requirements defined within the organization. In case no such standards are available, one can in theory consider the industry best practices.

Scalability

Scalability is the degree to which the solution is able to respond to an increased workload. Given that the data considered during the various iterations vary in volume, a solution's scalability needs to be considered with respect to the volume of data to be migrated.  

Standardization

Standardization is the degree to which technical standards were implemented for a solution to guarantee a certain level of performance or other aspects considered as important. There can be standards for data storage, processing, access, transportation, or other aspects associated with the migration processes. Moreover, especially when multiple DMs are in scope, organizations can define a set of standards and guidelines that should be further considered.  

Testability

Testability is the degree to which a solution can be tested with respect to the set of functional and data-related requirements. Even if what matters for the success of a migration is the data in their final form, to achieve that one needs to validate the logic and test thoroughly the transformations performed on the data. As the data go through the data pipelines, they need to be tested at the critical points – points where the data suffer important transformations. Moreover, one can consider record counters for the records processed at each such critical point, to assure that no records are lost in the process.  

Traceability

Traceability is the degree to which the changes performed on the data can be traced from the target to the source systems at record, respectively at entity level. In theory, it's enough to document the changes at attribute level, though in some cases it might be needed to document also the changes performed on individual values. 

Mappings at attribute level allow tracing the data flow, while mappings at value level allow tracing the changes occurring within values. 

09 January 2021

ERP Implementations: It’s all about Planning

ERP Implementation

Ideally, the total volume of work could be uniformly distributed over the project's whole duration, though in practice the curve representing the effort has the form of a wave, or an aggregation of waves, that tends to reach its peak shortly before or during the Go-Live(s). More importantly, higher fluctuations occur in the various areas of the project over the project's whole duration, as there are dependencies between the various functional areas, as one needs to wait for decisions to be made, people are not available, etc. Typically, the time wasted on waiting, researching or other non-value-added activities has the potential of leading to such peaks. Therefore, the knowledge must be available, and decisions must be taken when required, which can be challenging but not impossible to achieve. 

To bridge the time wasted on waiting, the team members need to work on other topics. If on the customer's side the resources can maybe handle other activities, on the vendor's side the costs can be high and proportional to the volume of waiting. Therefore, the vendor's resources must be involved in at least two projects or do work in other areas in advance, which is not always possible. However, when the vendor's resources are involved in two or more projects, unless the planning is perfect or each resource can handle the work as it comes, further waiting times are added. The customer is then forced either to book the resources exclusively, or to wait and carry the costs associated with it. 

On the other side, ERP implementations tend to become exploration projects, especially when the team has only partial knowledge about the system, or the requirements have a certain degree of specialization that deviates from the standard processes. The more unknowns an ERP implementation has, the more difficult it is to plan. To be able to plan, one must know the activities ahead and how long they take, and of course, one must adhere to the delivery dates, because each delay can have a cascading effect that can impact the project's schedule considerably. 

Probably the best approach to planning is to group the activities into packages and plan the packages, leaving it in each subteam's duty to handle the planning within each package, and managing at the upper level only the open issues, risks or opportunities. This shifts the burden from the Project Manager's shoulders into the project. Moreover, even if in theory the plan can consider each activity, it will become obsolete as soon as it's updated, given the considerable volume of work required to maintain it. Periodically, one can still revise the whole plan to identify opportunities and risks. What the team can do is to plan for a certain time interval (e.g. 4-6 weeks) and build from there. This allows focusing on the most important activities. 

To further shift the burden, activities like Data Migration, Data Cleaning or the integrations with third-party systems should be treated, when possible, as subprojects. Despite having interdependencies with the main project (e.g. parameters, master data, decisions) and sharing the same resources, they have their own schedule whose deadlines need to be aligned with the main project's milestones. 

Even if the team and the business put all their effort into respecting the plan, and even if the plan is realistic, the initial plan can seldom be respected – it's anyway just a sketch of the road ahead that can change as the project progresses – and this aspect needs to be understood by the business, otherwise it will lead to false expectations. On the other side, the team must try to respect the deadlines and communicate in time the inability to do so. It's an interplay in which communication is more important than ever.

Previous Post <<||>> Next Post


31 October 2020

Data Warehousing: Data Lakes & other Puddles

Data Warehousing

One can consider a data lake as a repository of all of an organization's data found in raw form, however this constraint might be too harsh, as data found at different levels of processing can be imported as well; for example, the results of data mining or other Data Science techniques/methods can be considered as raw data for further processing.

In the initial definition provided by James Dixon, the difference between a data lake and a data mart/warehouse was expressed metaphorically as the transition from bottled water to lakes streamed (artificially) from various sources. It thus contrasts the objective-oriented, limited and single-purposed role of the data mart/warehouse with the flow of data in nature that can be tapped and harnessed as desired. These are though metaphors intended to sensitize the buyer. Personally, I like to think of the data lake as an extension of the data infrastructure, of which the data mart or warehouse is an integral part. Imposing further constraints seems to have no benefit.  

Probably the most important characteristic of a data lake is that it makes the data of an organization discoverable and consumable, though from there to insight and other benefits is a long road that requires specific knowledge about the techniques used, as well as about the organization's processes and data. Without this, data lake-based solutions can lead to erroneous results, the same as mixing several ingredients without having knowledge about their usage can lead to cooking experiments aloof from the art of cooking.

A characteristic of data is that they go through continuous change and have different timeliness, respectively degrees of quality with respect to the data quality dimensions implied and the sources considered. Data need to reflect the reality at the appropriate level of detail and quality required by the processing application(s), this applying to data warehouses/marts as well as to data lake-based solutions.

Data found in raw form don't necessarily represent the truth and don't necessarily acquire good quality no matter how much they are processed. Solutions need to be resilient with respect to the data they handle through their layers, independently of the data quality and transmission problems. Whether one talks about ETL, data migration or other types of data processing, keeping the data integrity across the various levels and layers can be maybe the most important demand upon solutions.

Snapshots, as moment-in-time recordings of tables, entities, sets of entities, datasets or whole databases, often prove to be the best mechanisms for keeping data integrity when this aspect is essential to their processing (e.g. data migrations, high-accuracy measurements). Unfortunately, the more systems are involved in the process and the broader the span of the solutions over the sources, the more difficult it becomes to take such snapshots.

A SQL query's output represents a snapshot of the data, therefore SQL-based solutions are usually appropriate for most of the business scenarios in which the characteristics of the data (typically volume, velocity and/or variety) make their processing manageable. However, when the data are extracted by other means, integrity is harder to obtain, especially when there's no timestamp to allow partitioning the data on a time scale, the handling of data integrity becoming thus, in extremis, a programmer's task. In addition, getting snapshots of the data as they are changed can be a costly and futile task.
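
As a small illustration, assuming a hypothetical source table that carries a modification timestamp, the timestamp is what allows partitioning the extraction on a time scale and taking consistent deltas:

-- each run extracts only the records changed since the previous snapshot
DECLARE @LastExtraction datetime2 = '2020-10-30 00:00:00';

SELECT OrderId, CustomerId, OrderValue, ModifiedOn
FROM dbo.SalesOrders
WHERE ModifiedOn > @LastExtraction;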

Further on, maintaining data integrity can prove to be a matter of design, not only with respect to the processing of data, but also with respect to the source applications and the business processes they implement. Mastery of the underlying principles, techniques, patterns and methodologies helps in the process of designing the right solutions.

Note:
Written as answer to a Medium post on data lakes and batch processing in data warehouses. 

29 September 2020

Strategic Management: Simplicity V (ERP Implementations' Story I)

Strategic Management

Probably ERP implementations are one of the most complex types of projects one deals with in the IT world, however their complexity seldom resides in the technologies themselves, but in the effort that needs to be made by organizations before, during and post-implementation. Through their transformative nature, ERP implementations have the potential of changing the whole organization, if their potential is exploited accordingly, which is unfortunately not always the case. Therefore, the challenges don't come down only to managing a project or implementing a technology, but also to managing change, and that usually happens, or needs to happen, at several levels. 

Typically, the change is considered mainly at the IT infrastructure and processual levels, because at these levels most of the visible changes happen – that's what steals the show. For the whole project duration it's about replacing one or more legacy systems and making sure that the new infrastructure works as expected. The more an organization deviates from the standard, the more effort is needed, and this effort can exhaust an organization's resources to the degree that it will need some time to recover after that, financially, but maybe more importantly from a vital point of view.

Even if the technological and processual layers are important, as they form the foundation on which an organization builds, besides the financial and material flows there are also the data, informational and knowledge flows, which seem to be neglected. Quite often that's where the transformational potential resides. If an organization is not able to change these flows positively, in the long term the implementation will deal with problems people wished had been addressed much earlier, when the effort and effect would have met the lowest resistance, respectively the highest impact. 

An ERP implementation involves the migration of data between source(s) and target(s), the data requirements, including the one of appropriate quality, being regarded with respect to the target system(s). As within the data migration steps the data are extracted from the various sources, enriched, and prepared for import into the target system(s), there is the potential of bringing data quality to a level which would help the organization further. It's probably simpler to imagine the process as taking the data from one place, cleaning and enriching the data to bring it to the needed form, and then putting the data into the new system. It's a unique chance of improving data quality without touching the source or target system(s) while getting considerable value.

Unfortunately, many organizations' efforts to improve the quality of their data stop after the implementation. If there's no focus and there are no structures in place to continue the effort, sooner or later the data's quality will decrease despite the efforts made earlier. Investing for example in a long-term data quality improvement or even a data management initiative might prove to be an exploratory and iterative process in which mistakes are maybe made and the direction might need to be changed, though, as long as learning is involved, in this often resides the power of changing for the better.

When one talks about information there are two aspects to it: how an organization arrives from data at actionable information that reaches in time the people who need it, respectively how information is further aggregated, recombined, shared, and harnessed into knowledge. These are the first three layers of the knowledge (aka DIKW) pyramid, and an organization's real success story lies in how it can manage these flows together, while increasing the value they provide for the organization. It's an effort that must start with the implementation itself, or even earlier, and continue after the implementation, as the organization sees fit.

Previous Post <<||>> Next Post

Written: Sep-2020, Last Reviewed: Mar-2024

24 June 2020

Strategic Management: Simplicity I (Simple, but not that Simple)

Strategic Management
Strategic Management Series

Simplicity of design has been for centuries the holy grail of architects, while software designers seem to position themselves in opposition to the trend, as they aim at using a mix of technologies that usually increases the architecture's complexity (sometimes the more, the newer and the fancier, the better). Unfortunately, despite the implied but not necessarily reachable potential, each component added to an information system or infrastructure has the potential of increasing the overall complexity by a factor proportional to the degree of interactions it creates, respectively to the number of issues it creates or allows to propagate through these interactions.

Conversely, one talks about simplicity in IT without stating what is intended by it, and it can mean many things. Quite often the aim is packed within the 'keep it simple stupid' (aka KISS) mantra, a modern and pejorative alternative to Occam's razor. KISS became a principle in software architecture design, and it can mean that a simple solution works better than a complex one, or that pursuing something in the simplest manner possible is usually better. The nuances are wide enough to cover a wide spectrum of solutions, arriving at statements that the simplest choice to make is the most appropriate one to make, a thing that's not necessarily true in IT, where complexity finds itself at home.

Starting with the considerable number of technologies coexisting in integrations and ending with the exceptions existing in processes or the quality of the data, things are almost never as simple as one may wish. An IT infrastructure's complexity depends on the number of existing components, on whether they come from different generations or from different vendors, on whether they are deployed on different operating systems or supported by different service providers, on the number of customizations made, on the degree of overlapping of the data and the integrations needed to keep the data in sync, respectively on the differences existing in data models, quality and use. In general, the more variance, randomness, and challenges one has, the higher the overall complexity.

Paraphrasing Saint Exupéry, in IT simplicity is reached when there is no longer anything to add or anything to take away, or, in Hans Hofmann's words, simplicity is reflected in 'the ability to simplify means to eliminate the unnecessary so that the necessary may speak'. This refers to the features, what a piece of software can do, respectively to the functionality, how a certain outcome is reached, which arrive to be packed in various logical aggregations (function point, functional requirement, story, epic, model, product, etc.) or physical aggregations (classes, components, packages, services, models, etc.). These are the levels at which one needs to address simplicity adequately.

To make something simple one must be able either to design a solution up to the detail where there's nothing to add or remove, or to start with something and remove or add things to reach simplicity. Both approaches involve a considerable effort, time, and multiple iterations, however the first approach can easily become utopian, as some architectures are so complex that sooner or later the second approach comes into play. Therefore, one needs in general to focus on what seems an optimal solution and optimize it continuously in further iterations. Aiming for perfection from the beginning, or also later in the improvement process, is a foolhardy wish.

Even if simplicity is hard to achieve, one can still talk about the elegance of a solution, scenarios in which the various components fit together like the pieces of a puzzle, or about robustness, reliability, correctness, maintainability, (re)usability, or learnability. These latter characteristics are known in Software Engineering as (software) quality attributes.

19 May 2020

Data Migrations (DM): In-house Built Solutions I (Introduction)

Data Migration
Data Migrations Series

A Data Migration (DM) is the move of all or a subset of the data available from one or more system(s) into other system(s). For ERP systems this can be achieved by having a database in between, which allows the import and combination of data from various sources, the further processing of the data, respectively the preparation of the data as per the target system(s)’ needs. 

At a deeper level one needs to model the entities from the source and the target systems at a level that allows easier processing, which comes down to removing and adding data, respectively making mappings and transformations between the input and output models. In other words, a set of entities is transformed into another set of entities, while at a much lower level the attributes from the source system must be mapped to those of the target system. 

SQL scripting provides enough flexibility in modelling the source or target system(s), in changing the data and logic on the fly, in building additional functionality, in reusing logic or cleaning the data. It works well with big amounts of structured data, while for unstructured data one can still combine the power of SQL with ad-hoc or even NoSQL processing. It all sounds easy, doesn’t it? Then where does the complexity people talk about come from?
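
As a minimal sketch, such a transformation layer can be expressed as a view sitting between the imported source data and the target model (the table, column and mapping names below are purely illustrative, assuming a SQL Server-like dialect):

-- staging view mapping source customers to the target system's model (illustrative names)
CREATE VIEW dbo.vCustomers_Target
AS
SELECT src.CustomerNumber AS CustomerAccount                          -- direct mapping
     , UPPER(LTRIM(RTRIM(src.CustomerName))) AS CustomerName          -- basic cleansing
     , COALESCE(map.CountryCodeTarget, 'N/A') AS CountryCode          -- value mapping via a mapping table
     , CASE WHEN src.Blocked = 1 THEN 'No' ELSE 'Yes' END AS IsActive -- logic transformation
FROM dbo.Customers_Source src
     LEFT JOIN dbo.Map_CountryCodes map
       ON src.CountryCode = map.CountryCodeSource;

Such views can be layered, tested and rewritten on the fly, which is where much of the mentioned flexibility comes from.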

Knowing SQL and the features of a (relational) database (the how) is only a small part of all that is needed. Building a DM solution implies knowledge of the source data models and of the data behind them (the what), knowing the target (the where), how and when to import the data, why the data must be made available, in what way the data impacts the target system(s), respectively who needs to be involved. The more of these aspects are known, the higher the chances for the DM to succeed. 

When there’s a gap in the needed knowledge, then the knowledge must be acquired from the business and/or consultants, by investing time in data discovery or similar activities, in experimenting and testing. The more uncertainty there is, the more iterations are needed to achieve the degree of certainty (aka quality) needed. At minimum one needs 3-4 iterations to migrate the data – to build the concept, to test the migration fully, to get the sign-off in a test environment, respectively to do the migration into production. 

Another type of complexity derives from the interdependencies existing between the DM and vital activities like system parameterization and data cleaning. A system’s parameterization defines how the system behaves, an important number of the parameterization values being reflected also in the data prepared for the DM. Wrong parameterization values can be caught during the various iterations, however poor data quality will reflect through the system after Go-Live and will haunt the business in the long term, unless addressed correctly within the project. 

All these aspects are reflected to some degree also in the DM’s architecture (e.g. building restart points, including data validation and automatic cleaning) and process (e.g. the sequencing of steps). Of all the effort involved in a DM, maybe 20-30% needs to be invested in building the solution. These 20-30% can in theory be reduced by using specialized DM tools, though the advantage provided by such tools can be illusory, especially when further limitations are involved. On the other side, such tools may provide functionality that addresses the other 70-80%. 
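
For instance, a data validation step can be as simple as a query run after each processing step or restart point, flagging the records that would be rejected by the target system (the rules and names below are again illustrative and build on the earlier sketch):

-- validation query: records that would fail the target system's minimal rules (illustrative)
SELECT CustomerAccount
     , CustomerName
     , CountryCode
FROM dbo.vCustomers_Target
WHERE ISNULL(CustomerName, '') = ''  -- mandatory attribute missing
   OR CountryCode = 'N/A'            -- value couldn't be mapped
   OR LEN(CustomerAccount) > 20;     -- exceeds the target field's length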

As usual in IT, there’s a trade-off between advantages and disadvantages. An in-house built DM solution may provide the needed flexibility at the cost of a certain investment of time. Whether it pays off is something one has to prove by walking this path. 

03 January 2020

Data Management: Data Literacy (Part I: A Second Language)

Data Management

At the Gartner Data & Analytics Summit that took place in 2018 in Grapevine, Texas, the importance of data literacy for taking advantage of the emergence of data analytics, artificial intelligence (AI) and machine learning (ML) technologies was reiterated. Gartner expected then that by 2020, 80% of organizations would initiate deliberate competency development in the field of data literacy [1] – or, as they put it, learning to ‘speak data’ as a ‘second language’.

Data literacy is typically defined as the ability to read, work with, analyze, and argue with data. Sure, these form the building blocks of data literacy, though what I’m missing from this definition is the ability to understand the data, even if understanding should be the outcome of reading, and the ability to put data into the context of business problems, even if the analysis of data could involve this latter aspect too.

Understanding has several aspects: understanding the data structures available within an organization, understanding the problems with data (including quality, governance, privacy and security), respectively understanding how the data are linked to the business processes. These aspects go beyond the simple abilities included in the above definition, which from my perspective doesn’t cover the particularities of an organization (data structures, data quality and processes) – the business component. This is reflected in one of the problems often met in the BI/data analytics industry – the solutions developed by the various service providers don’t reflect the organizations’ needs, one of the causes being the inability to understand the business in its segments or holistically.  

Putting data into context means being able to use the respective data in answering pressing business problems. A business problem needs to be correctly defined first, and this requires a deep understanding of the business. Then one needs to identify the data that could help find the answers to the problem, respectively to build one or more models that allow elaborating further theories and performing further simulations. This is an ongoing process in which the models built are further enhanced, when possible, or replaced by better ones.

Probably the comparison with a second language is only partially true. One can learn a second language and argue in it, though it doesn’t mean that the argumentation will be correct or constructive as long as the person can’t do the same in the native language. Moreover, one can have such abilities in the native or a secondary language, but not be able to do the same with data, as different skillsets are involved. This aspect can make quite a difference in a business scenario. One must also be able to philosophize and think critically, as well as to understand the forms of communication and their rules in respect to data.

To philosophize means being able to understand the causality and the further relations existing within the business and to think critically about them. Being able to communicate means more than being able to argue – it means being able to use effectively the communication tools – the communication channels, as well as the methods of representing data, information and knowledge. In extremis one might even go beyond the basic statistical tools, stepping thus into what statistical literacy is about. In fact, the difference between the two types of literacy has become thinner, residing mainly in the accent put on their specific aspects.

These are the areas in which probably many professionals lack proficiency. Data literacy should be the aim, however this takes time and is a continuous iterative process that can take years to reach maturity. It’s important for organizations to start addressing these aspects, to progress in small increments and to learn from the experience accumulated.


References:
[1] Gartner (2018) How data and analytics leaders learn to master information as a second language, by Christy Pettey (link)

07 May 2019

Business Intelligence: How Big Is Your Report?

Business Intelligence

How big are your reports? How big do reports need to be? Do your reports really reflect your needs? Have they become too cluttered with data? Do you have too many reports on the same topic? How many is too many? These are a few of the questions BI developers and users alike should ask themselves from time to time.

A report is any document with textual and/or graphical formatted output of data from one or more data sources, (previously) designed to convey a basis for decision making or operational activities. A report is characterized by the amount of data it holds (the datasets), the amount of data it is based on (the source data), the number and complexity of the queries on which the report is based, the number of data sources, the manner in which data are structured (tabular, matrix, graphical), the filtering and sorting possibilities, as well as by the navigability possibilities (drilldown, drill-through, slice-and-dice, etc.). 

On the other side, for users what matters are characteristics like the report’s performance, the amount of useful information it conveys, the degree to which it helps address a business need, the quality of the data, the degree to which it satisfies the various policies, the look and feel, and the possibility of exporting the data to standard file formats.

A report’s size is typically defined by the product of the columns and records the report displays, plus the formatting and the various types of graphical content, though this depends on the filter criteria used. Usually one considers the average size of a report based on the typical filters used. Nowadays networks and database-specific techniques allow displaying fairly big reports (20-50 MB) in a fair amount of time (10-20 seconds) without affecting the network’s, respectively the database’s performance, which for most of the requests should be enough. When the users need bigger volumes of data, then a direct data dump (extract) from the database should be considered, when possible. (A data export is not a report and the two should be differentiated as such.)
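
For orientation, the average size can be estimated roughly from the data itself, e.g. by multiplying the record count returned by the typical filters with the average row length of the columns shown. A hypothetical sketch in a SQL Server-like dialect (table, columns and filter are illustrative):

-- rough estimate of a report dataset's size (illustrative table, columns and filter)
SELECT COUNT(*) AS Records
     , AVG(ISNULL(DATALENGTH(CustomerName), 0) + ISNULL(DATALENGTH(Country), 0) + ISNULL(DATALENGTH(Amount), 0)) AS AvgRowBytes
     , SUM(CAST(ISNULL(DATALENGTH(CustomerName), 0) + ISNULL(DATALENGTH(Country), 0) + ISNULL(DATALENGTH(Amount), 0) AS bigint)) / 1048576.0 AS EstimatedSizeMB
FROM dbo.SalesOrders
WHERE OrderDate >= '2019-01-01'; -- the typical filter criteria

The estimate ignores formatting and graphical content, though it’s usually enough to decide whether a report or a data extract is the more appropriate vehicle.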

The number of records that can be shown in a report depends on the reporting framework’s capabilities, e.g. there are reporting tools that cope well with showing a few thousand records but have difficulties showing or exporting tens of thousands of records. The best example in this respect is Excel with its well-known limitation of 65,536 records (2^16) and 256 columns (2^8), limits addressed in Excel 2007 and raised to about 1 million records (2^20), respectively 16,384 columns (2^14). Even so, reporting tools that use older drivers can fail to export all the data to Excel when the former limitation is reached.

In general, reports with too many columns tend to obfuscate the understanding of the data and are more difficult to navigate. The more the user needs to scroll horizontally, the higher in general the obfuscation. If the users really need 50 columns then they should be provided, however in general 20-25 should be enough for an operational report. Tactical and strategic reports need a narrower focus and the information should fit on a screen without the need for scrolling.

When reports get too big it is recommended to split them into two or more reports that address specific requirements, however this can lead to too many distinct reports, and further to the duplication of the effort for creating and documenting them, respectively to the duplication of logic and data. Therefore, the challenge is to find the right balance between the volume of reports, their usability and the effort needed to manage them. In certain scenarios it even makes sense to consolidate similar reports.

19 April 2019

Performance Management: The Need for Perfection vs. Excellence

Performance Management

A recurring theme over the years, occurring in various contexts, seems to be the need for perfection, a need going sometimes in extremis beyond common sense. The simplest theory attempting to explain at least some of these situations is that people tend to confuse excellence with perfection, from this confusion deriving false beliefs, false expectations and unhealthy behavior. 

Beyond the fact that each individual has an illusory image of what perfection is about, perfection is in certain situations a limiting force rooted in an idealistic way of looking at life. Primarily, perfection implies that we will never be good enough to reach it, as we are striving for something that doesn’t exist. From this arise the external and internal criticism, criticism that, instead of helping us build something, drains our energy to the extent that it destroys all we have built over the years with considerable effort. Secondarily, in the long run, perfection tends to steal our inner peace and balance, letting fear take over – the fear of making mistakes, of losing the acceptance and trust of others. It focuses on our faults, errors and failures instead of driving us toward our goals. In extremis it brings out the worst in people, actors and spectators alike. 

Proximate in meaning, though diametrically opposed in its implications, excellence focuses on our goals, on the aspiration of aiming higher without implying a limit to it. It’s a shift of attention from failure to possibilities, to what matters, to reaching our potential, to acknowledging the long way covered. It allows us to build upon former successes and failures. Excellence is what we need to aim at in personal and professional life. Will Durant, summarizing Aristotle, said: “We are what we repeatedly do. Excellence, then, is not an act, but a habit.” 

People who attempt to give 100% of their best to achieve a (positive) goal are to be admired, however the proximity of 100% is only occasionally achievable, hopefully when needed the most. 100% is another illusory limit we force upon ourselves, as it’s correlated to the degree of achievement, completeness or quality an artefact or result can ideally have. We rightly define quality as the degree to which something is fit for purpose. Again, a moving target that needs to be made explicit before we attempt to reach it, otherwise quality envisions perfection rather than excellence and effort is wasted. 

Considering the volume of effort needed to achieve a goal, Pareto’s principle (aka the 80/20 rule) seems to explain its underlying forces best. The rule states that roughly 80% of the effects come from 20% of the causes. A corollary is that we can achieve 80% of a goal with 20% of the effort needed to achieve it fully. This means that to achieve the remaining 20% of the goal we need to put in four times the effort already spent. This rule seems to govern the elaboration of concepts, designs and other types of documents, and I suppose it can be easily extended to other activities like writing code, cleaning data, improving performance, etc. 

Given the complexity, urgency and dependencies of the tasks or goals before us, it’s probably beneficial sometimes to focus first on the 80% of their extent, so we can make progress, and to address the remaining 20% if needed, when needed. This incremental approach allows us to make progress faster. Also, in time, through excellence, we can bridge the gap between the two numbers, as less time and effort is needed in the process.


