
28 March 2024

Data Management: Master Data Management [MDM] (Notes)

Disclaimer: This is a work in progress intended to consolidate information from various sources.
Last updated: 28-Mar-2024

Master Data Management (MDM)

  • {definition} the technologies, processes, policies, standards and guiding principles that enable the management of master data values, ensuring the consistent, shared, contextual use across systems of the most accurate, timely, and relevant version of truth about essential business entities [2],[3]
  • {goal} enable sharing of information assets across business domains and applications within an organization [4]
  • {goal} provide authoritative source of reconciled and quality-assessed master (and reference) data [4]
  • {goal} lower cost and complexity through use of standards, common data models, and integration patterns [4]
  • {driver} meeting organizational data requirements
  • {driver} improving data quality
  • {driver} reducing the costs for data integration
  • {driver} reducing risks 
  • {type} operational MDM 
    • involves solutions for managing transactional master data in operational applications [1]
    • relies heavily on data integration technologies
  • {type} analytical MDM
    • involves solutions for managing analytical master data
    • centered on providing high quality dimensions with multiple hierarchies [1]
    • cannot influence operational systems
      • any data cleansing made within the analytical environment isn’t recognized by the transactional applications [1]
        • ⇒ inconsistencies with the operational data remain [1]
      • transactional application knowledge isn’t available to the cleansing process
  • {type} enterprise MDM
    • involves solutions for managing both transactional and analytical master data 
      • manages all master data entities
      • delivers maximum business value
    • operational data cleansing
      • improves the operational efficiencies of the applications and the business processes that use the applications
    • cross-application data need
      • consolidation
      • standardization
      • cleansing
      • distribution
    • needs to support a high volume of transactions
      • ⇒ master data must be contained in data models designed for OLTP
        • ⇐ ODS don’t fulfill this requirement 
  • {enabler} high-quality data
  • {enabler} data governance
  • {benefit} single source of truth
    • used to support both operational and analytical applications in a consistent manner [1]
  • {benefit} consistent reporting
    • reduces the inconsistencies previously experienced in reports influenced by complex transformations
  • {benefit} improved competitiveness
    • MDM reduces the complexity of integrating new data and systems into the organization
      • ⇒ increased flexibility and improves competitiveness
    • ability to react to new business opportunities quickly with limited resources
  • {benefit} improved risk management
    • more reliable and consistent data improves the business’s ability to manage enterprise risk [1]
  • {benefit} improved operational efficiency and reduced costs
    • helps identify the business’s pain points
      • by developing a strategy for managing master data
  • {benefit} improved decision making
    • reducing data inconsistency diminishes organizational data mistrust and facilitates clearer (and faster) business decisions [1]
  • {benefit} more reliable spend analysis and planning
    • better data integration helps planners come up with better decisions
      • improves the ability to 
        • aggregate purchasing activities
        • coordinate competitive sourcing
        • be more predictable about future spending
        • generally improve vendor and supplier management
  • {benefit} regulatory compliance
    • allows reducing compliance risk
      • helps satisfy governance, regulatory and compliance requirements
    • simplifies compliance auditing
      • enables more effective information controls that facilitate compliance with regulations
  • {benefit} increased information quality
    • enables organizations to monitor conformance more effectively
      • via metadata collection
      • it can track whether data meets information quality expectations across vertical applications, which reduces information scrap and rework
  • {benefit} quicker results
    • reduces the delays associated with extraction and transformation of data [1]
      • ⇒ it speeds up the implementation of application migrations, modernization projects, and data warehouse/data mart construction [1]
  • {benefit} improved business productivity
    • gives enterprise architects the chance to explore how effective the organization is in automating its business processes by exploiting the information asset [1]
      • ⇐ master data helps organizations realize how the same data entities are represented, manipulated, or exchanged across applications within the enterprise and how those objects relate to business process workflows [1]
  • {benefit} simplified application development
    • provides the opportunity to consolidate the application functionality associated with the data lifecycle [1]
      • ⇐ consolidation in MDM is not limited to the data
      • ⇒ provides a single functional service to which different applications can subscribe
        • ⇐ introducing a technical service layer for data lifecycle functionality provides the type of abstraction needed for deploying SOA or similar architectures
  • factors to consider for implementing an MDM:
    • effective technical infrastructure for collaboration [1]
    • organizational preparedness
      • for making a quick transition from a loosely combined confederation of vertical silos to a more tightly coupled collaborative framework
      • {recommendation} evaluate the kinds of training sessions and individual incentives required to create a smooth transition [1]
    • metadata management
      • via a metadata registry 
        • {recommendation} sets up a mechanism for unifying a master data view when possible [1]
        • determines when that unification should be carried out [1]
    • technology integration
      • {recommendation} diagnose what technology needs to be integrated to support the process instead of developing the process around the technology [1]
    • anticipating/managing change
      • proper preparation and organization are needed to subtly introduce change to the way people think and act, as with any shift in patterns [1]
      • changes in reporting structures and needs are unavoidable
    • creating a partnership between Business and IT
      • IT roles
        • plays a major role in executing the MDM program [1]
      • business roles
        • identifying and standardizing master data [1]
        • facilitating change management within the MDM program [1]
        • establishing data ownership
    • measurably high data quality
    • overseeing processes via policies and procedures for data governance [1]
  • {challenge} establishing enterprise-wide data governance
    • {recommendation} define and distribute the policies and procedures governing the oversight of master data
      • seeking feedback from across the different application teams provides a chance to develop the stewardship framework agreed upon by the majority while preparing the organization for the transition [1]
  • {challenge} isolated islands of information
    • caused by vertical alignment of IT
      • makes it difficult to fix the dissimilarities in roles and responsibilities in relation to the isolated data sets when these are integrated into a master view [1]
    • caused by data ownership
      • the politics of information ownership and management have created artificial exclusive domains supervised by individuals who have no desire to centralize information [1]
  • {challenge} consolidating master data into a centrally managed data asset [1]
    • transfers the responsibility and accountability for information management from the lines of business to the organization [1]
  • {challenge} managing MDM
    • MDM should be considered a program and not a project or an application [1]
  • {challenge} achieving timely and accurate synchronization across disparate systems [1]
  • {challenge} different definitions of master metadata 
    • different coding schemes, data types, collations, and more
      • ⇐ data definitions must be unified
  • {challenge} data conflicts 
    • {recommendation} resolve data conflicts during the project [5]
    • {recommendation} replicate the resolved data issues back to the source systems [5]
  • {challenge} domain knowledge 
    • {recommendation} involve domain experts in an MDM project [5]
  • {challenge} documentation
    • {recommendation} properly document your master data and metadata [5]
  • approaches
    • {architecture} no central MDM 
      • isn’t a real MDM approach
      • any cross-system interaction is performed ad hoc, only when required [5]
        • e.g. performing analysis on data from multiple systems, ad-hoc merging and cleansing
      • {drawback} although very inexpensive at the beginning, it turns out to be the most expensive approach over time [5]
    • {architecture} central metadata storage 
      • provides unified, centrally maintained definitions for master data [5]
        • followed and implemented by all systems
      • ad-hoc merging and cleansing becomes somewhat simpler [5]
      • does not use a specialized solution for the central metadata storage [5]
        • ⇐ the central storage of metadata is probably in an unstructured form 
          • e.g. documents, worksheets, paper
    • {architecture} central metadata storage with identity mapping 
      • stores, in mapping tables within the MDM solution, the keys that identify the same entities across systems (see the sketch after this list)
        • the MDM database contains only the keys from the source systems; it does not have any other attributes [5]
      • {benefit} data integration applications can be developed much more quickly and easily [5]
      • {drawback} raises problems in regard to maintaining master data over time [5]
        • there is no versioning or auditing in place to follow the changes [5]
          • ⇒ viable for a limited time only
            • e.g. during upgrading, testing, and the initial usage of a new ERP system to provide mapping back to the old ERP system
    • {architecture} central metadata storage and central data that is continuously merged 
      • stores metadata as well as master data in a dedicated MDM system
      • master data is not inserted or updated directly in the MDM system [5]
      • the merging (and cleansing) of master data from source systems occurs continuously, regularly [5]
      • {drawback} continuous merging can become expensive [5]
      • the only viable way to use this approach is to find out what has changed in the source systems since the last merge [5]
        • enables merging only the delta (new and updated data)
      • frequently used for analytical systems
    • {architecture} central MDM, single copy 
      • involves a specialized MDM application
        • master data, together with its metadata, is maintained in a central location [5]
        • ⇒ all existing applications are consumers of the master data
      • {drawback} all existing applications must be upgraded to consume master data from the central storage instead of maintaining their own copies [5]
        • ⇒ can be expensive
        • ⇒ can be impossible (e.g. for older systems)
      • {drawback} needs to consolidate all metadata from all source systems [5]
      • {drawback} the process of creating and updating master data could simply be too slow [5]
        • because of the processes in place
    • {architecture} central MDM, multiple copies 
      • uses central storage of master data and its metadata
        • ⇐ the metadata here includes only an intersection of common metadata from source systems [5]
        • each source system maintains its own copy of master data, with additional attributes that pertain to that system only [5]
      • after master data is inserted into the central MDM system, it is replicated (preferably automatically) to source systems, where the source-specific attributes are updated [5]
      • {benefit} good compromise between cost, data quality, and the effectiveness of the CRUD process [5]
      • {drawback} update conflicts
        • different systems can also update the common data [5]
          • ⇒ involves continuous merges as well [5]
      • {drawback} requires a specialized MDM application
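
To illustrate the "central metadata storage with identity mapping" approach, below is a minimal T-SQL sketch with hypothetical schema, table and column names (mdm.CustomerIdentityMap, erp.Customers, crm.Customers): the MDM database stores only the keys under which the same customer is known in each source system, while all descriptive attributes stay in the sources.

-- hypothetical mapping table: keys only, no descriptive attributes
CREATE TABLE mdm.CustomerIdentityMap (
    MasterCustomerId INT NOT NULL            -- surrogate key owned by the MDM database
  , SourceSystem     NVARCHAR(30) NOT NULL   -- e.g. 'ERP', 'CRM'
  , SourceCustomerId NVARCHAR(50) NOT NULL   -- key of the record in the source system
  , CONSTRAINT PK_CustomerIdentityMap PRIMARY KEY (SourceSystem, SourceCustomerId)
);

-- ad-hoc merging: the mapping lines up the records describing the same customer
SELECT ME.MasterCustomerId
     , E.CustomerName AS ErpName
     , C.CustomerName AS CrmName
FROM mdm.CustomerIdentityMap ME
     JOIN erp.Customers E
       ON ME.SourceSystem = 'ERP'
      AND ME.SourceCustomerId = E.CustomerId
     LEFT JOIN mdm.CustomerIdentityMap MC
       ON MC.MasterCustomerId = ME.MasterCustomerId
      AND MC.SourceSystem = 'CRM'
     LEFT JOIN crm.Customers C
       ON MC.SourceCustomerId = C.CustomerId;

Because no other attributes and no versioning or auditing are kept, the mapping only answers which records belong together, one of the reasons the approach is viable for a limited time only.
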
Acronyms:
MDM - Master Data Management
ODS - Operational Data Store
OLAP - Online Analytical Processing
OLTP - Online Transactional Processing
SOA - Service Oriented Architecture

References:
[1] The Art of Service (2017) "Master Data Management Course"
[2] DAMA International (2009) "The DAMA Guide to the Data Management Body of Knowledge" 1st Ed.
[3] Tony Fisher (2009) "The Data Asset"
[4] DAMA International (2017) "The DAMA Guide to the Data Management Body of Knowledge" 2nd Ed.
[5] Dejan Sarka et al (2012) "Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012" (Training Kit)

20 March 2024

Data Management: Master Data Management (Part I: Understanding Integration Challenges) [Answer]

Data Management
Data Management Series

Answering a post [1] on Master Data Management’s (MDM) integration challenges by Piethein Strengholt, the author of "Data Management at Scale".

Master data can be managed within individual domains, though the boundaries must be clearly defined, and some coordination is needed. Attempting to partition the entities based on domains doesn’t always work. The partition needs to be performed at attribute level, though even then there might be some exceptions involved (e.g. some Products are only for Finance to use). One can then identify attributes inside the system to create the boundaries.

MDM is simple if you have the right systems, processes, procedures, roles, and data culture in place. Unfortunately, people make it too complicated – oh, we need a nice shiny system for managing the data before they are entered in ERP or other systems, we need a system for storing and maintaining the metadata, and another system for managing the policies, and the story goes on. The lack of systems is given as a reason why people make no progress. Moreover, people will want to integrate the systems, increasing the overall complexity of the ecosystem.

The data should be cleaned in the source systems and assessed against the same. If that's not possible, then you have the wrong system! A set of well-built reports can make data assessment possible. 

The metadata and policies can be maintained in Excel (and stored in SharePoint), in SharePoint itself, or in a similar system that supports versioning. Pragmatic solutions can be found for the other topics as well.

ERP systems allow us to define workflows and enable a master data record to be published only when the information is complete, though there will always be exceptions (e.g., a Purchase Order must be sent today). Such exceptions make people circumvent the MDM systems with all the issues deriving from this.

Adding an MDM system within an architecture tends to increase the complexity of the overall infrastructure and create more bottlenecks. Occasionally, it just replicates the structures existing in the target system(s).

Integrations are supposed to reduce the effort, though in the past 20 years I never saw an integration work without issues, even where MDM is concerned. One of the main issues is that the solutions just synchronized the data without considering the processual dependencies, and sometimes also the referential dependencies. The time needed for troubleshooting the integrations can easily exceed the time for importing the data manually via an upload mechanism.

To make the integration work, the MDM system ends up duplicating all the validation available in the target system(s). This can make sense when a considerable volume of master data is created daily or weekly. Native connectors simplify the integrations, especially when they can handle errors transparently and allow records to be modified manually, though the issues start as soon as the target system is extended with more attributes or other structures.

If an organization has an MDM system, then all the master data should come from the MDM. As soon as a bidirectional synchronization is used (and other integrations might require this), Pandora’s box is open. One can define hard rules, though again, there are always exceptions in which manual interference is needed.

Attempting an integration of reference data is not recommended. ERP systems can have hundreds of such entities. Some organizations tend to have a golden system (a copy of production) with all the reference data. It works for some time, until people realize that the solution is expensive and time-consuming.

MDM systems do make sense in certain scenarios, though to get the integrations right can involve a considerable effort and certain assumptions and requirements must be met.


References:
[1] Piethein Strengholt (2023) Understanding Master Data Management’s Integration Challenges (link)


13 March 2024

Book Review: Zhamak Dehghani's Data Mesh: Delivering Data-Driven Value at Scale (2021)

Zhamak Dehghani's "Data Mesh: Delivering Data-Driven Value at Scale" (2021)

Zhamak Dehghani's "Data Mesh: Delivering Data-Driven Value at Scale" (2021) is a must read book for the data professional. So, here I am, finally managing to read it and give it some thought, even if it will probably take more time and a few more reads for the ideas to grow. Working in the fields of Business Intelligence and Software Engineering for almost a quarter-century, I think I can understand the historical background and the direction of the ideas presented in the book. There are many good ideas but also formulations that make me circumspect about the applicability of some assumptions and requirements considered. 

So, after data marts, warehouses, lakes and lakehouses, the data mesh paradigm seems to be the new shiny thing that will bring organizations beyond the inflection point, with tipping potential, from where the organization's growth will have an exponential effect. At least this seems to be the first impression when reading the first chapters.

The book follows to some degree the advocacy tone of promoting that "our shiny thing is much better than the previous thing", or "how bad the previous architectures or paradigms were and how good the new ones are" (see [2]). Architectures and paradigms evolve with the available technologies and our perception of what is important for businesses. Old and new have their place in the order of things, and the old will continue to exist, at least until the new proves its feasibility.

The definition of the data mesh as "a sociotechnical approach to share, access and manage analytical data in complex and large-scale environments - within or across organizations" [1] is too abstract even if it reflects at a high level what the concept is about. Compared to other material I read on the topic, the book succeeds in explaining the related concepts as well as the goals (called definitions) and benefits (called motivations) associated with the principles behind the data mesh, making the book approachable also by non-professionals.

Built around four principles "data as a product", "domain-oriented ownership", "self-serve data platform" and "federated governance", the data mesh is the paradigm on which data as products are developed; where the products are "the smallest unit of architecture that can be independently deployed and managed", providing by design the information necessary to be discovered, understood, debugged, and audited.

It's possible to create Lego-like data products, data contracts and/or manifests that address a product's usability characteristics, though unless the latter are generated automatically, put in the context of ERP and other complex systems, everything becomes quite an endeavor that requires time and adequate testing, increasing the overall timeframe until a data product becomes available.

The data mesh describes data products in terms of microservices, which structure architectures as collections of services that are independently deployable and loosely coupled. Asking data products to behave in this way is probably too hard a constraint, given the complexity and interdependency of the data models behind business processes and their needs. Does all the effort make sense? Is this the "agility" the data mesh solutions are looking for?

Many pioneering organizations are still fighting with the concept of data mesh as it proves to be challenging to implement. At a high level everything makes sense, but the way data products are expected to function makes the concept challenging to implement to the full extent. Moreover, as occasionally implied, the data mesh is about scaling data analytics solutions with the size and complexity of organizations. The effort makes sense when the organizations have a certain size and the departments have a certain autonomy, therefore, it might not apply to small to medium businesses.


References:
[1] Zhamak Dehghani (2021) "Data Mesh: Delivering Data-Driven Value at Scale" (link)
[2] SQL-troubles (2024) Zhamak Dehghani's Data Mesh - Monolithic Warehouses and Lakes (link)

04 March 2024

Business Intelligence: Microsoft Fabric's Domains and the Data Mesh I (The Challenge of Structure Matching)

Business Intelligence Series

The holy grail of building a Data Analytics infrastructure seems to be nowadays the creation of a data mesh, a decentralized data architecture that organizes data by specific business domains. This endeavor proves to be difficult to achieve given the various challenges faced – data integration, data ownership, data product creation and ownership, enablement of data citizens, respectively enforcing security and governance in a federated manner.

Microsoft Fabric promises to facilitate the creation of data meshes with the help of domains and subdomains by providing built-in security, administration, and governance features associated with them. A domain is a way of logically grouping together all the data in an organization that is relevant to a particular area or field. A subdomain is a way of fine-tuning the logical grouping of the data.

Business domains & their entities

At a high level the challenge of building a data mesh is how to match or aggregate structures. On one side is the high-level structure of the data mesh, while on the other side is the structure of the business data entities. The data entities can be grouped within a taxonomy with multiple levels that expands to the departments. That’s why it seems somehow natural to consider the departments as the top-most domains of the data mesh. The issue is that if the segmentation starts from a high level, it becomes inflexible in modeling. Moreover, one has only domains and subdomains, and thus a 2-level structure to model the main aspects of the data mesh.

Some organizations allow unrestricted access to the data belonging to a given department, while others break down the access to a more granular level. There are also organizations that don’t restrict the access at all, though this may change later. Besides permissions and a way of grouping together the entities, what value does setting the domains as departments bring?

Therefore, I’m not convinced about using an organization’s departmental structure as domains, especially when such a structure may change, and this would imply a full range of further changes. Moreover, such a structure doesn’t reflect the span of processes or how permissions are assigned for the various roles, which are better reflected in how information systems are structured. Most probably the solution needs to accommodate both perspectives and be somewhere in the middle.

Take for example the internal structure of the modules from Dynamics 365 (D365). The Finance area is broken down into Accounts Payable, Accounts Receivable, Fixed Assets, General Ledger, etc. In some organizations the departments reflect this delimitation to some degree, while in others they are just associated with finance-related roles. Moreover, the permissions are more granular, reflecting the data entities the users work with.

Conversely, SCM extends into Finance, as Purchase orders, Sales orders and other business documents are the starting or intermediary points of processes that span modules. Similarly, there are processes that start in CRM or other systems. The span of processes seems to be more appropriate for structuring the data mesh, though the overlapping of the systems with the roles involved in the processes and the free definition of process boundaries can overcomplicate the whole design.

It makes sense to define the domains at a level that resembles the structure of the modules available in D365, while the macro data entities represent the subdomains. The subdomains would then represent master as well as transactional data entities from the perspective of the domains, though there will be entities that need to be shared between multiple domains. Such a structure has fewer chances to change over time, allowing more flexibility and smaller areas of focus, and is thus easier to design, develop, test, deploy and maintain.


13 February 2024

Business Intelligence: A One Man Show V (Focus on the Foundation)

Business Intelligence Suite

I tend to agree that one person can't do anymore "everything in the data space", as Christopher Laubenthal put it in his article on the topic [1]. He seems to catch the essence of some of the core data roles found in organizations. Summarizing these roles, data architecture is about designing and building a data infrastructure, data engineering is about moving data, database administration is mainly about managing databases, data analysis is about assisting the business with data and reports, information design is about telling stories, while data science can be about studying the impact of various components on the data.

However, I find his analogy between a college's functional structure and the core data roles as poorly chosen from multiple perspectives, even if both are about building an infrastructure of some type. 

Firstly, the two constructions have different foundations. Data exists in an organization even without data architects, data engineers or database administrators (DBAs)! It's enough to buy one or more information systems functioning as islands and reporting needs will arise. The need for a data architect might come when the systems need to be integrated or maybe when a data warehouse needs to be built, though many organizations are still in business without such constructs. For the others, the more complex the integrations, the bigger the need for a Data Architect. Conversely, some systems can be integrated by design and such capabilities might drive their selection.

Data engineering is needed mainly in the context of the cloud, respectively of data lake-based architectures, where data needs to be moved, processed and prepared for consumption. Conversely, architectures like Microsoft Fabric minimize data movement, the focus being on data processing, the successive transformations the data needs to undergo in moving from the bronze to the gold layer, respectively on creating an organizational semantic data model. The complexity of the data processing depends on the data's structuredness, quality and other data characteristics.

As I mentioned before, modern databases, including the ones in the cloud, reduce the need for DBAs to a considerable degree. Unless the volume of work is big enough to consider a DBA role as an in-house resource, organizations will more likely consider involving a service provider and a contingent to cover the needs. 

Having in-house one or more people acting under the Data Analyst role, people who know and understand the business, respectively the data tools used in the process, can go a long way. Moreover, it's helpful to have an evangelist-like resource in house, a person who is able to raise awareness and knowhow, help diffuse knowledge about tools, techniques, data, results, best practices, respectively act as a mentor for the Data Analyst citizens. From my point of view, these are the people who form the data-related backbone (foundation) of an organization and this is the minimum of what an organization should have!

Once this is established, one can build data warehouses, data integrations and other support architectures, respectively think about BI and Data strategy, Data Governance, etc. Of course, having a Chief Data Officer and a Data Strategy in place can bring more structure in handling the topics at the various levels - strategic, tactical, respectively operational. In construction one starts with a blueprint, and a data strategy can have the same effect, if one knows how to write it and implement it accordingly. However, the strategy is just a tool, while the data-knowledgeable workers are the foundation organizations should build upon!

"Build it and they will come" philosophy can work as well, though without knowledgeable and inquisitive people the philosophy has high chances to fail.


Resources:
[1] Christopher Laubenthal (2024) "Why One Person Can’t Do Everything In Data" (link)

Business Intelligence: A One Man Show IV (Data Roles between Past and Future)

Business Intelligence Series

Databases nowadays are highly secure, reliable and available to a degree that reduces the involvement of DBAs to a minimum. The more databases and servers are available in an organization, and the older they are, the bigger the need for dedicated resources to manage them. The number of DBAs involved tends to be proportional to the volume of work required by the database infrastructure. However, if the infrastructure is in the cloud, managed by the cloud providers, it's enough to have a person in the middle who manages the communication between the cloud provider(s) and the organization. The person doesn't even need to be a DBA, even if some knowledge in the field is usually recommended.

The requirement for a Data Architect comes when there are several systems in place and there're multiple projects to integrate or build around the respective systems. It's also a question of what drives the respective requirement - is it the knowledge of data architectures, the supervision of changes, and/or the review of technical documents? The requirement is thus driven by the projects in progress and those waiting in the pipeline. Conversely, if all the systems are in the cloud and their integration is standardized or doesn't involve much architectural knowledge, the role becomes obsolete or at least not mandatory.

The Data Engineer role is a bit more challenging to define because it appeared in the context of cloud-based data architectures. It seems to be related to the movement of data via ETL/ELT pipelines and to the processing and preparation of data for the various needs. Data modeling or data presentation knowledge isn't mandatory, even if ideal. The role seems to overlap with the one of a Data Warehouse professional, be it a simple architect or developer. The role's knowhow also depends on the tools involved, because one thing is to build a solution based on a standard SQL Server, and another thing to use dedicated layers and architectures for the various purposes. The number of engineers should be proportional to the number of data entities involved.

Conversely, the existence of solutions that move and process the data as needed, can reduce the volume of work. Moreover, the use of AI-driven tools like Copilot might shift the focus from data to prompt engineering. 

The Data Analyst role is kind of a Cinderella - it can involve, depending on the case, everything from requirements elicitation to report writing and results' interpretation, respectively from data collection and data modeling to data visualization. If you have a special wish related to your data, just add it to the role! The number of analysts should be related to the number of issues existing in the organization where the collection and processing of data could make a difference. Conversely, the Data Citizen, even if it's not a role but a desirable state, could in theory absorb the Data Analyst role.

The Data Scientist is supposed to reveal the gems of knowledge hidden in the data by using Machine Learning, Statistics and other magical tools. The more data available, the higher the chances of finding something, even if probably statistically insignificant or incorrect. The role makes sense mainly in the context of big data, even if some opportunities might be available at smaller scales. Scientists' number depends on the number of projects focused on the big questions. Again, one talks about the Data Scientist citizen. 

The Information Designer role seems to be more about data visualization and presentation. It makes sense in the organizations that rely heavily on visual content. All the other organizations can rely on the default settings of data visualization tools, independently of whether AI is involved or not.


03 March 2023

Data Warehousing: Building a Modern Data Warehouse with Azure Synapse

Data Warehousing

Introduction

When building a data warehouse (DWH), several keywords or derivatives of them appear in the requirements: secure, flexible, simple, scalable, reliable, performant, non-redundant, modern, automated, real-time, etc. As practice proves, all these requirements are sometimes challenging to address given the increased complexity of the architecture chosen. There are so many technologies on the DWH market promising all these at low costs, low effort and high ROI, though DWH projects continue to fail to address the business and technical requirements.

At a basic level, building a DWH requires a data storage layer and an ETL (Extract, Transform, Load) tool responsible for the data movement between the various source systems and the DWH, and eventually within the DWH itself. After that, each technology added to the landscape tends to increase the overall complexity (and should be regarded with a critical eye in what concerns the advantages and disadvantages).

Data Warehouse Architecture (on-premise)

A Reference Architecture

When building a DWH or a data migration solution, which has many of the characteristics of a DWH, from the many designs I prefer to keep things as simple as possible. An approach based on a performant database engine like SQL Server as storage layer and SSIS (SQL Server Integration Services) as ETL proved to be the best choice until now, allowing most of the technical requirements to be addressed by design. Then come the choices on how and where to import and transform the data, at what level of granularity, how the semantic layer is built, how the data are accessed, etc.

Being able to pull the data (see the extract subprocess) from the data sources on an as-needed basis offers the most flexible approach; however, there are cases in which direct access to the source data is not possible, and one has to rely on a push approach, where data are dumped regularly to a given location (e.g. an FTP folder structure), to be picked up as needed. It's actually a hybrid between push and pull, because a fully push approach would mean pushing the data directly into the DWH, which can also be acceptable, though it might offer lower control over the data's movement and involve a few other challenges (e.g. permissions, concurrency).

Data can be prepared for the DWH in the source systems (e.g. exposed via data objects or API calls), anywhere in between via ETL-based transformations (see the transform subprocess) or directly in the DWH. I prefer importing the data (see the load subprocess) 1:1 without any transformations from the various sources via SSIS (or similar technologies) into a set of tables that designate the staging area. It's true that in this way the ETL technology is used to a minimum, though unless there's a major benefit to using it for data transformations, using the DWH's capabilities and SQL for data processing can provide better performance and flexibility.

Besides the selection of the columns in scope (typically columns with meaningful values), it's important not to do any transformations in the extraction layer because the data is imported faster (eventually using fast load options as in SSIS) and it assures a basis for troubleshooting (as the data don't change between loads). Some filters can be applied only when the volume of data is high and the subset of the data can be identified clearly (e.g. when data are partitioned based on a key like business unit, legal entity or creation date).

For better traceability, the staging schemas can reflect the systems they come from, and the tables and the columns should have the same names, respectively the same data types. On such tables no constraints are applied and no indexes are needed. They can however be constructed on the production tables (aka base tables) - the copies of these tables used in production.
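
As an illustration of the staging pattern described above, here is a minimal T-SQL sketch with hypothetical names (a stg_erp schema mirroring an ERP source, dbo holding the base tables); the staging table copies the source's names and data types and carries no constraints or indexes:

-- staging table named and typed like the source table, no constraints, no indexes
CREATE TABLE stg_erp.CustInvoiceJour (
    InvoiceId     NVARCHAR(20)
  , InvoiceDate   DATETIME
  , InvoiceAmount DECIMAL(19,4)
  , DataAreaId    NVARCHAR(4)    -- partition-like key usable for filtering high volumes
);

-- 1:1 load into the base (production) table once the SSIS package has filled the staging table
TRUNCATE TABLE dbo.CustInvoiceJour;

INSERT INTO dbo.CustInvoiceJour (InvoiceId, InvoiceDate, InvoiceAmount, DataAreaId)
SELECT InvoiceId, InvoiceDate, InvoiceAmount, DataAreaId
FROM stg_erp.CustInvoiceJour
WHERE DataAreaId = 'us01';       -- optional filter, only when the subset is clearly identifiable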

Some DWH architects try replicating the constraints from the source systems and/or add more constraints on top to define the various business rules. Rigor is good in some scenarios, though it can involve a considerable effort and it might be challenging to keep over time, especially when considering the impact of big data on DWH architectures. Instead of using constraints, building a set of SQL scripts that pinpoint the issues as reports allows more flexibility, with the risk of having inconsistencies running wild through the reports. The data should be cleaned in the source system and, if that's not possible, then properly addressed in the DWH. Applying constraints will make the data unavailable for reporting until the data are corrected, while being more permissive would allow dirty data through. Thus, either case has advantages and disadvantages, though the latter seems to be more appropriate.

Indexes on the production schema should reflect the characteristics of the queries run on the data and shouldn't replicate the indexes from the source environments, even if some overlaps might exist. In practice, dropping the non-clustered indexes on the production tables before loading the data from staging, and recreating them afterwards proves to provide faster loading (see load optimization techniques). 
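
A sketch of the load-optimization pattern just mentioned, assuming SQL Server 2016+ (for DROP INDEX IF EXISTS) and reusing the hypothetical tables from the previous sketch:

-- drop the non-clustered index before the load ...
DROP INDEX IF EXISTS IX_CustInvoiceJour_InvoiceDate ON dbo.CustInvoiceJour;

INSERT INTO dbo.CustInvoiceJour (InvoiceId, InvoiceDate, InvoiceAmount, DataAreaId)
SELECT InvoiceId, InvoiceDate, InvoiceAmount, DataAreaId
FROM stg_erp.CustInvoiceJour;

-- ... and recreate it afterwards, shaped by the reporting queries rather than by the source system
CREATE NONCLUSTERED INDEX IX_CustInvoiceJour_InvoiceDate
    ON dbo.CustInvoiceJour (InvoiceDate) INCLUDE (InvoiceAmount);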

The production tables are used for building a "semantic" data model or something similar. Several levels of views, table-valued functions and/or indexed/materialized views allow building the dimension and fact tables, the latter incorporating the business logic needed by the reports. Upon case, stored procedures, physical or temporary tables, and table variables can be used to prepare the data, though they tend to break the "free" flow of data as steps in-between need to be run. On the other hand, in certain scenarios their use is unavoidable.

The first level of views (aka base views) is based on the base tables without any joins, though they include only the fields in use (needed by the business) ordered and "grouped" together based on their importance or certain characteristics. The views can include conversions of data types, translations of codes into meaningful values, and quite seldom filters on the data. Based on these "base" views the second level is built, which attempts to define the dimension and fact tables at the lowest granularity. These views include joins between tables coming from the same or different systems, respectively mappings of values defined in tables, and whatever it takes to build such entities. However, transformations on individual fields are pushed, when possible, to the lower level to minimize logic redundancy. For similar reasons, the logic could be broken down over two or more "helper" views when visible benefits could be obtained from it (e.g. troubleshooting, reuse, maintenance). It's important to balance between creating too many helper views and encapsulating too much logic in a view.
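
A minimal sketch of the two view layers, again with hypothetical object names (dbo.vCompanies is assumed to be another base view); the base view exposes only the fields in use with light conversions, while the second-level view joins base views into a fact entity at the lowest granularity:

-- first level: "base" view, no joins, only the fields in use, light conversions and renamings
CREATE VIEW dbo.vCustInvoiceJour
AS
SELECT InvoiceId
     , CAST(InvoiceDate AS DATE) AS InvoiceDate
     , InvoiceAmount
     , DataAreaId AS CompanyCode
FROM dbo.CustInvoiceJour;
GO

-- second level: fact entity at the lowest granularity, joining base views
CREATE VIEW dbo.vFactCustomerInvoices
AS
SELECT CIJ.InvoiceId
     , CIJ.InvoiceDate
     , CIJ.InvoiceAmount
     , CMP.CompanyName
FROM dbo.vCustInvoiceJour CIJ
     LEFT JOIN dbo.vCompanies CMP
       ON CIJ.CompanyCode = CMP.CompanyCode;
GO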

One of the design principles used in building the entities is to minimize the redundancy of the fields used, ideally without having columns duplicated between entities at this level. This would facilitate the traceability of columns to the source tables within the "semantic" layer (typically at the cost of a few more joins). In practice, one is forced to replicate some columns to simplify some parts of the logic.

Further views can be built based on the dimension and fact entities to define the logic needed by the reports. Only these objects are used and no direct reference to the "base" tables or views is made. Moreover, to offer better performance, the views can be materialized or, when there's an important benefit, physically saved as tables (e.g. having multiple indexes for different scenarios). This is the case for entities with considerable data volumes called over and over.
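
Where such an entity is called over and over, a sketch of persisting it physically (the rpt schema and the index choices are hypothetical), refreshed as part of the load:

-- persist a heavy entity physically for consumption
IF OBJECT_ID('rpt.FactCustomerInvoices', 'U') IS NOT NULL
    DROP TABLE rpt.FactCustomerInvoices;

SELECT *
INTO rpt.FactCustomerInvoices
FROM dbo.vFactCustomerInvoices;

-- several indexes tailored to the different reporting scenarios
CREATE CLUSTERED INDEX IX_FactCustomerInvoices_InvoiceDate
    ON rpt.FactCustomerInvoices (InvoiceDate);
CREATE NONCLUSTERED INDEX IX_FactCustomerInvoices_CompanyName
    ON rpt.FactCustomerInvoices (CompanyName) INCLUDE (InvoiceAmount);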

This approach of building the entities is usually flexible enough to address most of the reporting requirements, independently of whether the technical solution has the characteristics of a DWH, data mart or data migration layer. Moreover, the overall architectural approach can be used on-premise as well as in cloud architectures, where Azure SQL Server and ADF (Azure Data Factory) provide similar capabilities. Compared with standard SQL Server, some features might not be available, while other features might bring further benefits, though the gaps should be negligible.

Data Management topics like Master Data Management (MDM), Data Quality Management (DQM) and/or Metadata Management can be addressed as well by using third-party tools or tools from the Microsoft stack - Master Data Services (MDS) and Data Quality Services (DQS) in combination with SSIS help address a wide range of scenarios - however these are optional.

Moving to the Cloud

Within the context of big data, characterized by (high/variable) volume, value, variety, velocity, veracity, and further less important V's, the aforementioned technical requirements still apply; however, within a cloud environment the overall architecture becomes more complex. Each component becomes a service. There are thus various services for data ingestion, storage, processing, sharing, collaboration, etc. The way data are processed also involves several important shifts: ETL is replaced by ELT, FTP and local storage by Data Lakes, data packages by data pipelines, stateful by stateless, SMP (Symmetric Multi-Processing) by MPP (Massively Parallel Processing), and so on.

As file storage is less expensive than database storage, there's an increasing trend of dumping business-critical data into the Data Lake via data pipelines or features like Link to Data Lake or Export to Data Lake (*), which synchronize the data between source systems and the Data Lake in near real-time at table or entity level. Whether saved as CSV, Parquet, Delta Lake or any other standard file format, in single files or partitions, the data can be used directly or indirectly for analytics.

Cloud-native warehouses allow addressing topics like scalability, elasticity, fault-tolerance and performance by design, though further challenges appear as compute needs to be decoupled from storage, the workloads need to be estimated for assuring the performance, data may be distributed across data centers spanning geographies, the infrastructure is exposed to attacks, etc. 

Azure Synapse

If one wants to take advantage of the MPP architecture's power, Microsoft provides an analytical architecture based on Azure Synapse, an analytics service that brings together data integration, enterprise DWH, and big data analytics. Besides two types of SQL-based data processing services (dedicated vs serverless SQL pools), it also comes with a Spark pool for in-memory cluster computing.

A DWH based on Azure Synapse is not that different from the reference architecture described above for an on-premise solution. Actually, a DWH based on a dedicated SQL pool (aka a physical data warehouse) involves the same steps mentioned above. 

Data Warehouse Architecture with Dedicated SQL Pool

The data can be imported via ETL/ELT pipelines into the DWH, though there are also mechanisms for consuming the data directly from the files stored in the Data Lake or Azure storage. CETAS (aka Create External Table as Select) can be defined on top of the data files, the external tables acting as "staging" or "base" tables in the architecture described above. When using a dedicated SQL pool it makes sense to use the CETAS as "staging" tables, with the processed data then being dumped to "optimized" physical tables for consumption and refreshed periodically. However, when this happens the near real-time character of the data is lost. Using the CETAS as base tables would keep this characteristic as long as the data isn't saved physically in tables or files, maybe to the detriment of performance.
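
For illustration, a minimal sketch of a CETAS acting as a "staging"/"base" table in a serverless SQL pool; the storage URL, schema, data source and file format names are hypothetical, and the external data source and file format must exist (or be created) beforehand:

-- hypothetical external file format and data source pointing to the Data Lake
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL DATA SOURCE DataLake
WITH (LOCATION = 'https://contosolake.dfs.core.windows.net/datalake');

-- CETAS: the query result is written back to the lake and exposed as an external table
CREATE EXTERNAL TABLE stg.CustInvoiceJour
WITH (
    LOCATION = 'staging/custinvoicejour/'
  , DATA_SOURCE = DataLake
  , FILE_FORMAT = ParquetFormat
)
AS
SELECT InvoiceId, InvoiceDate, InvoiceAmount, DataAreaId
FROM OPENROWSET(
        BULK 'raw/custinvoicejour/*.parquet'
      , DATA_SOURCE = 'DataLake'
      , FORMAT = 'PARQUET') AS src;

Note that a CETAS cannot be refreshed in place; for a new snapshot the external table has to be dropped (and the underlying files removed) or written to a new location, one of the limitations mentioned further below.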

Using a dedicated SQL pool for direct reporting can become expensive, as the pool needs to be available at least during business hours for incoming user requests, or at least for importing the data and refreshing the datasets. When using the CETAS as base tables, a serverless (aka on-demand) SQL pool, which uses a pay-per-use billing model, could prove to be more cost-effective and flexible in many scenarios. By design, it helps to keep the near real-time character of the data. Moreover, even if the data are actually moved from the source tables into the Data Lake, this architecture has the characteristics of a logical data warehouse:

Data Warehouse Architecture with Serverless SQL Pool

Unfortunately, unless one uses Spark tables, misuses views or adds an Azure SQL database to the architecture, there are no physical tables or materialized views in a serverless SQL pool. There's still the option to use data pipelines for regularly exporting intermediary data to files (incl. over partitions or folders), even if this involves more overhead, as it's not possible to export data over SQL syntax to files more than once (though this might change in the future). For certain scenarios it could be useful to store data in an Azure SQL Server or similar database, including a dedicated SQL pool.

Choosing between serverless and dedicated SQL pools is not an exclusive choice; both or all 3 types of pools (if we consider also the Spark pool) can be used in the architecture for addressing specific challenges, especially when we consider that there are important differences between the features available in each of the pools. Moreover, one can start the PoC based on the serverless SQL pool and, when the solution becomes mature enough and is used across the enterprise, parts of the logic or all of it can be migrated to a dedicated SQL pool. This would allow saving costs at the beginning to the detriment of further effort later.

Talking about the physical storage, data engineers recommend defining within a Data Lake several layers (aka regions, zones) labeled as bronze, silver and gold (and probably platinum will join the club anytime soon). The bronze layer refers to the raw data available in the Data Lake, including the files on which the initial CETAS are defined. The silver layer refers to transformed, cleaned, enriched and integrated data, the data resulting from the second layer of views described above. The gold layer refers to the data to which business logic was applied and which were prepared for consumption, the data resulting from the final layer of views. Of course, data pipelines can be used to prepare the data at these stages, though a view-based approach offers more flexibility and is easier to troubleshoot, manage and reuse than data pipelines.

Ideally the gold data should involve no or minimal further transformation before reaching the users, though that's not realistic. Building a DWH takes a considerable time and the business can't usually wait until everything is in place. Therefore, reports based on the DWH will continue to coexist with reports directly accessing the source data, which will lead to controversies. Enforcing a single source of truth will help to minimize the gap, though it will not eliminate it completely.

Closing Notes

These are just the outlines of a minimal reference architecture. There's more to consider, as there are several alternatives (see [1] [2] [3] [4]) for each of the steps considered here, with each technology, new feature or mechanism opening new opportunities. The advantages and disadvantages should always be considered against the business needs and requirements. One approach, even if recommended, might not work for all, though unless there's an important requirement or an opportunity associated with an additional technology, deviating from reference architectures might not be such a good idea after all.

Note:
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.


Resources:
[1] Microsoft Learn (2022) Modern data warehouse for small and medium business (link)
[2] Microsoft Learn (2022) Data warehousing and analytics (link)
[3] Microsoft Learn (2022) Enterprise business intelligence (link)
[4] Microsoft Learn (2022) Serverless Modern Data Warehouse Sample using Azure Synapse Analytics and Power BI (link)
[5] Coursera (2023) Data Warehousing with Microsoft Azure Synapse Analytics (link) [course, free to audit]
[6] SQLBits (2020) Mahesh Balija's Building Modern Data Warehouse with Azure Synapse Analytics (link)
[7] Matt How (2020) The Modern Data Warehouse in Azure: Building with Speed and Agility on Microsoft’s Cloud Platform (Amazon)
[8] James Serra's blog (2022) Data lake architecture (link)
[9] SQL Stijn (2022) SQL Building a Modern Lakehouse Data Warehouse with Azure Synapse Analytics: Moving your Database to the lake (link)
[10] Solliance (2022) Azure Synapse Analytics Workshop 400 (link) [GitHub repository]

04 May 2019

Data Warehousing: Push vs. Pull

Data Warehousing

In data integrations, data migrations and data warehousing there is the need to move data between two or more systems. In the simplest scenario there are only two systems involved, a source and a target system, though there can be complex scenarios in which data from multiple sources need to be available in a common target system (as in the case of data warehouses/marts or data migrations), or data from one source (e.g. ERP systems) need to be available in other systems (e.g. Web shops, planning systems), or there can be complex cases in which there is a many-to-many relationship (e.g. data from two ERP systems are consolidated in other systems).  

The data can flow in one direction from the source systems to the target systems (aka unidirectional flow), though there can be situations in which, once the data are modified in the target system, they need to flow back to the source system (aka bidirectional flow), as in the case of planning or product development systems. In complex scenarios the communication may occur multiple times within the same process until a final state is reached.

Independently of the number of systems and the type of communication involved, data need to flow between the systems as smoothly as possible, ensuring that the data are consistent between the various systems and available when needed. The architectures responsible for moving data between the systems are based on two simple mechanisms - push vs pull - or combinations of them.

With a push mechanism the data are pushed from the source system into the target system(s), the source system being responsible for the operation. Typically the push can happen as soon as an event occurs in the source system, an event that leads to or follows a change in the data. There can also be cases when it is preferred to push the data at regular points in time (e.g. hourly, daily), especially when the changes aren’t needed immediately. This latter scenario still allows making changes to the data in the source until they are sent to the other system(s). When the ability to make changes is critical, this can be controlled via specific business rules.

With a pull mechanism the data are pulled from the source system into the target system, the target system being responsible for the operation. This usually happens at regular points in time or on demand; however, the target system has to check whether the data have changed.
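
As a simple illustration of a pull, here is a hedged T-SQL sketch in which the target keeps a watermark per source table and asks the source only for the records changed since the last successful pull (dwh.LoadWatermarks, erp.Customers and all column names are hypothetical):

DECLARE @LastPull datetime2;

SELECT @LastPull = LastPullDate
FROM dwh.LoadWatermarks
WHERE TableName = 'Customers';

-- the target must be able to detect what changed, e.g. via a modification timestamp
SELECT CustomerId
     , CustomerName
     , ModifiedDate
FROM erp.Customers
WHERE ModifiedDate > @LastPull;

-- on success, move the watermark forward
UPDATE dwh.LoadWatermarks
SET LastPullDate = SYSUTCDATETIME()
WHERE TableName = 'Customers';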

Hybrid scenarios may involve a middleware that sits between the systems, being responsible for pulling the data from the source systems and pushing them into the target systems. Another hybrid scenario is when the source system pushes the data to an intermediary repository, the target system(s) pulling the data on a need basis. The repository can reside on the source, the target, or in-between. A variation of it is when the source informs the target that a change happened and it’s up to the target to decide whether it needs the data or not.

The main differentiators between the various methods are the timeliness, completeness and consistency of the data. Timeliness refers to the urgency with which data need to be available in the target system(s), completeness refers to the degree to which the data are ready to be sent, while consistency refers to the degree to which the data from the source are consistent with the data from the target systems.

Based on their characteristics, integrations seem to favor push methods, while data migrations and data warehousing favor pull methods, though which method suits best depends entirely on the business needs under consideration.

01 December 2017

Data Management: Data Architecture (Just the Quotes)

"The data architecture is the most important technical aspect of your business intelligence initiative. Fail to build an information architecture that is flexible, with consistent, timely, quality data, and your BI initiative will fail. Business users will not trust the information, no matter how powerful and pretty the BI tools. However, sometimes it takes displaying that messy data to get business users to understand the importance of data quality and to take ownership of a problem that extends beyond business intelligence, to the source systems and to the organizational structures that govern a company’s data." (Cindi Howson, "Successful Business Intelligence: Secrets to making BI a killer App", 2008)

"Data architecture allows strategic development of flexible modular designs by insulating the data from the business as well as the technology process." (Charles D Tupper, "Data Architecture: From Zen to Reality", 2011)

"Data architectures are the heart of business functionality. Given the proper data architecture, all possible functions can be completed within the enterprise easily and expeditiously." (Charles D Tupper, "Data Architecture: From Zen to Reality", 2011)

"The enterprise architecture delineates the data according to the inherent structure within the organization rather than by organizational function or use. In this manner it makes the data dependent on business objects but independent of business processes." (Charles D Tupper, "Data Architecture: From Zen to Reality", 2011)

"A defining characteristic of the data lakehouse architecture is allowing direct access to data as files while retaining the valuable properties of a data warehouse. Just do both!" (Bill Inmon et al, "Building the Data Lakehouse", 2021)

"Data lake architecture suffers from complexity and deterioration. It creates complex and unwieldy pipelines of batch or streaming jobs operated by a central team of hyper-specialized data engineers. It deteriorates over time. Its unmanaged datasets, which are often untrusted and inaccessible, provide little value. The data lineage and dependencies are obscured and hard to track." (Zhamak Dehghani, "Data Mesh: Delivering Data-Driven Value at Scale", 2021)

"Data mesh [...] reduces points of centralization that act as coordination bottlenecks. It finds a new way of decomposing the data architecture without slowing the organization down with synchronizations. It removes the gap between where the data originates and where it gets used and removes the accidental complexities - aka pipelines - that happen in between the two planes of data. Data mesh departs from data myths such as a single source of truth, or one tightly controlled canonical data model." (Zhamak Dehghani, "Data Mesh: Delivering Data-Driven Value at Scale", 2021)

"The data lakehouse architecture presents an opportunity comparable to the one seen during the early years of the data warehouse market. The unique ability of the lakehouse to manage data in an open environment, blend all varieties of data from all parts of the enterprise, and combine the data science focus of the data lake with the end user analytics of the data warehouse will unlock incredible value for organizations. [...] "The lakehouse architecture equally makes it natural to manage and apply models where the data lives." (Bill Inmon et al, "Building the Data Lakehouse", 2021)

"Data architecture is the structure that enables the storage, transformation, exploitation, and governance of data." (Pradeep Menon, "Data Lakehouse in Action", 2022)

"A data architecture needs to have the robustness and ability to support multiple data management and operational models to provide the necessary business value and agility to support an enterprise’s business strategy and capabilities." (Sonia Mezzetta, "Principles of Data Fabric", 2023)

"Data architecture is the process of designing and building complex data platforms. This involves taking a comprehensive view, which includes not only moving and storing data but also all aspects of the data platform. Building a well-designed data ecosystem can be transformative to a business." (Brian Lipp, "Modern Data Architectures with Python", 2023)

"Enterprises have difficulties in interpreting new concepts like the data mesh and data fabric, because pragmatic guidance and experiences from the field are missing. In addition to that, the data mesh fully embraces a decentralized approach, which is a transformational change not only for the data architecture and technology, but even more so for organization and processes. This means the transformation cannot only be led by IT; it’s a business transformation as well." (Piethein Strengholt, "Data Management at Scale: Modern Data Architecture with Data Mesh and Data Fabric" 2nd Ed., 2023)

"A data architecture defines a high-level architectural approach and concept to follow, outlines a set of technologies to use, and states the flow of data that will be used to build your data solution to capture big data. [...] Data architecture refers to the overall design and organization of data within an information system." (James Serra, "Deciphering Data Architectures", 2024)

"A data mesh is a decentralized data architecture with four specific characteristics. First, it requires independent teams within designated domains to own their analytical data. Second, in a data mesh, data is treated and served as a product to help the data consumer to discover, trust, and utilize it for whatever purpose they like. Third, it relies on automated infrastructure provisioning. And fourth, it uses governance to ensure that all the independent data products are secure and follow global rules." (James Serra, "Deciphering Data Architectures", 2024)

"The goal of any data architecture solution you build should be to make it quick and easy for any end user, no matter what their technical skills are, to query the data and to create reports and dashboards." (James Serra, "Deciphering Data Architectures", 2024)

23 February 2017

Data Management: Data Integration (Definitions)

"The process of coherently using data from across platforms. applications or business units. Data integration ensures that data from different sources is merged allowing silos of data to be combined." (Tony Fisher, "The Data Asset", 2009)

"The planned and controlled:
a) merge using some form of reference,
b) transformation using a set of business rules, and
c) flow of data from a source to a target, for operational and/or analytical use. Data needs to be accessed and extracted, moved, validated and cleansed, standardized, transformed, and loaded. (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The collection of data from various sources with the same significance into one uniform record. This data may be physically integrated, for example, into a data warehouse or virtually, meaning that the data will remain in the source systems, however will be accessed using a uniform view." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"Data integration comprises the activities, techniques, and tools required to consolidate and harmonize data from different (multiple) sources into a unified view. The processes of extract, transform, and load (ETL) are part of this discipline." (Piethein Strengholt, "Data Management at Scale", 2020)

"Pulling together and reconciling dispersed data for analytic purposes that organizations have maintained in multiple, heterogeneous systems. Data needs to be accessed and extracted, moved and loaded, validated and cleaned, and standardized and transformed." (Information Management)

"The combination of technical and business processes used to combine data from disparate sources into meaningful insights." (Solutions Review)

"The process of retrieving and combining data from different sources into a unified set for users, organizations, and applications." (MuleSoft) 

"Data integration is the practice of consolidating data from disparate sources into a single dataset with the ultimate goal of providing users with consistent access and delivery of data across the spectrum of subjects and structure types, and to meet the information needs of all applications and business processes." (OmiSci) [source]

"Data integration is the process of combining data from multiple source systems to create unified sets of information for both operational and analytical uses." (Techtarget)

"Data integration is the process of bringing data from disparate sources together to provide users with a unified view. The premise of data integration is to make data more freely available and easier to consume and process by systems and users." (Tibco) [source]

"Data integration is the process of retrieving and combining data from different sources into a unified set of data. Data integration can be used to combine data for users, organizations, and applications." (kloudless)

"Data integration is the process of taking data from multiple disparate sources and collating it in a single location, such as a data warehouse. Once integrated, data can then be used for detailed analytics or to power other enterprise applications." (Xplenty) [source]

"Data integration is the process used to combine data from disparate sources into a unified view that can provide valuable and actionable information." (snowflake) [source]

"Data integration refers to the technical and business processes used to combine data from multiple sources to provide a unified, single view of the data." (OmiSci) [source]

"The discipline of data integration comprises the practices, architectural techniques and tools for achieving the consistent access and delivery of data across the spectrum of data subject areas and data structure types in the enterprise to meet the data consumption requirements of all applications and business processes." (Gartner)

18 February 2017

Data Management: Data Migration (Definitions)

"The process of extracting data from operational systems to a data warehouse with minimal effect on the source systems, and the transformation of the source data into a format consistent with the design and requirements of the data warehouse." (Microsoft Corporation, "Microsoft SQL Server 7.0 System Administration Training Kit", 1999)

"The movement of data from one storage system to another." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"Form- and function-preserving movement of data between locations or formats, which can take one of three forms: (1) from one physical system or location to another; (2) from one physical format to another; or (3) from one logical format to another." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"The process of migrating data to or from an application such as moving data from a sales force automation tool to a data warehouse." (Tony Fisher, "The Data Asset", 2009)

"The process of transferring data from one database to another." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Moving a specific set of data from one system to the other. It contains a source system where the data resides prior to the migration, criteria determining the scope of data to be migrated, a transformation that the data will go through, and a destination system where the data will be inserted after." (MuleSoft)

"The process of moving data between two or more storage systems, data formats, warehouses or servers." (Solutions Review)

"A Data Migration is the process of transferring data between computer storage systems or file formats. They greatly range in scale and can often be significant projects including numerous preparation and reconciliation activities." (experian) [source]

"Data migration is the process of moving data between different types of storage or file formats. Cloud data migration refers to transferring or replicating data from on-premise systems to cloud-based storage." (Qlik) [source]

"Data migration is the process of moving a specific set of data from one system to the other. One system is the source system where the data resides prior to the migration, and one system is the destination system where the data will be inserted after." (kloudless)

"Data migration is the process of transferring data between data storage systems, data formats or computer systems." (Techtarget) [source]

"Data migration is the process of transferring data from repository to another." (Information Management)

15 February 2017

Data Management: Data Architecture (Definitions)

"Data Architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. Data architecture includes topics such as database design, information integration, metadata management, business semantics, data modeling, metadata workflow management, and archiving." (Martin Oberhofer et al, "Enterprise Master Data Management", 2008)

"Describes how data is organized and structured to support the development, maintenance, and use of the data by application systems. This includes guidelines and recommendations for historical retention of the data, and how the data is to be used and accessed." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"the organized arrangement of components to optimize the function, performance, feasibility, cost, and/or aesthetics of an overall structure." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The logical-data architecture describes the specific data elements held by the team in a platform-agnostic and business-friendly manner. It plots out the specific tables, fields, and relationships within the team’s data assets and is usually fully normalized to minimize redundancy and represents the highest level of design efficiency possible." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"The physical-data architecture is the lowest level of detail in data architecture. It describes how the logical architecture is actually implemented within the data mart and describes elements by their technical (rather than business) names." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"Defines a company-wide, uniform model of corporate data (the corporate data model). It also describes the architecture for the distribution and retention of data. This describes which data will be stored in which systems, which systems are single sources of truth for which data objects or attributes and the flow of data between the systems." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"One of the layers of the enterprise architecture (EA) that focuses on the IT data architecture side, both for transactional and business intelligence IT data architecture." (David K Pham, "From Business Strategy to Information Technology Roadmap", 2016)

"The discipline, methods, and outputs related to understanding data, its meaning and relationships." (Gregory Lampshire et al, "The Data and Analytics Playbook", 2016)

"Models, policies, and guidelines that structure how data are collected, stored, used, managed, and integrated within an organization." (Jonathan Ferrar et al, "The Power of People", 2017)

"Data architecture is the structure that enables the storage, transformation, exploitation, and governance of data." (Pradeep Menon, "Data Lakehouse in Action", 2022)

"Data architecture is the process of designing and building complex data platforms. This involves taking a comprehensive view, which includes not only moving and storing data but also all aspects of the data platform. Building a well-designed data ecosystem can be transformative to a business." (Brian Lipp, "Modern Data Architectures with Python", 2023)

"A data architecture defines a high-level architectural approach and concept to follow, outlines a set of technologies to use, and states the flow of data that will be used to build your data solution to capture big data. [...] Data architecture refers to the overall design and organization of data within an information system." (James Serra, "Deciphering Data Architectures", 2024)

"Data architecture encompasses the rules, policies, models, and standards that govern data collection and how that data is then stored, managed, processed, and used within an organization’s databases and data systems." (snowflake) [source]

"Data architecture is the process by which an organization aligns its data environment with its operational goals." (Xplenty) [source]

01 February 2017

Data Management: Data Management (Definitions)

"The day-to-day tasks necessary to tactically manage data, including overseeing its quality, lineage, usage, and deployment across systems, organizations, and user communities." (Jill Dyché & Evan Levy, "Customer Data Integration", 2006)

"A corporate service which helps with the provision of information services by controlling or coordinating the definitions and usage of reliable and relevant data." (Keith Gordon, "Principles of Data Management", 2007)

"The policies, procedures, and technologies that dictate the granular management of data in an organization. This includes supervising the quality of data and ensuring it is used and deployed properly." (Tony Fisher, "The Data Asset", 2009)

"Structured approach for capturing, storing, processing, integrating, distributing, securing, and archiving data effectively throughout their life cycle." (Linda Volonino & Efraim Turban, "Information Technology for Management 8th Ed", 2011)

"The business function that develops and executes plans, policies, practices, and projects that acquire, control, protect, deliver, and enhance the value of data." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The process of managing data as a resource that is valuable to an organization or business, including the process of developing data architectures, developing practices and procedures for dealing with data, and then executing these aspects on a regular basis." (Jim Davis & Aiman Zeid, "Business Transformation", 2014)

"Processes by which data across multiple platforms is integrated, cleansed, migrated, and managed." (Hamid R Arabnia et al, "Application of Big Data for National Security", 2015)

"The full lifecycle care of organizational data assets, through the implementation of accepted good practice, to develop and maintain their value." (Kevin J Sweeney, "Re-Imagining Data Governance", 2018)

"Controlling, protecting, and facilitating access to data in order to provide information consumers with timely access to the data they need. The functions provided by a database management system." (Information Management)

"The development and execution of architectures, policies and practices to manage the data life-cycle needs of an enterprise." (Solutions Review)

"The policies, procedures, and technical choices used to handle data through its entire lifecycle from data collection to storage, preservation and use. A data management policy should take account of the needs of data quality, availability, data protection, data preservation, etc." (Open Data Handbook) 

"The processes, procedures, policies, technologies, and architecture that manage data from definition to destruction, which includes transformation, governance, quality, security, and availability throughout its life cycle." (Forrester)

"The process by which data is acquired, validated, stored, protected, and processed. In turn, its accessibility, reliability, and timeliness is ensured to satisfy the needs of the data users. Data management properly oversees the full data lifecycle needs of an enterprise." (Insight Software)

"Data management comprises all the disciplines related to ingesting, organizing, and maintaining data as a valuable resource." (OmiSci) [source]

"Data management (DM) consists of the practices, architectural techniques, and tools for achieving consistent access to and delivery of data across the spectrum of data subject areas and data structure types in the enterprise, to meet the data consumption requirements of all applications and business processes." (Gartner)

"Data management consists of practices and tools used to ingest, store, organize, and maintain the data created and gathered by an organization in order to deliver reliable and timely data to users." (Qlik) [source]

"Data management is a strategy used by organizations to make data secure, efficient, and available for any relevant business purposes." (Xplenty) [source]

"Data management is the implementation of policies and procedures that put organizations in control of their business data regardless of where it resides. […] Data management is concerned with the end-to-end lifecycle of data, from creation to retirement, and the controlled progression of data to and from each stage within its lifecycle." (Informatica) [source]

"The function of controlling the acquisition, analysis, storage, retrieval, and distribution of data." (IEEE 610.5-1990)

