20 March 2017

⛏️Data Management: Data Structure (Definitions)

"A logical relationship among data elements that is designed to support specific data manipulation functions (trees, lists, and tables)." (William H Inmon, "Building the Data Warehouse", 2005)

"Data stored in a computer in a way that (usually) allows efficient retrieval of the data. Arrays and hashes are examples of data structures." (Michael Fitzgerald, "Learning Ruby", 2007)

"A data structure in computer science is a way of storing data to be used efficiently." (Sahar Shabanah, "Computer Games for Algorithm Learning", 2011)

"Data structure is a general term referring to how data is organized. In modeling, it refers more specifically to the model itself. Tables are referred to as 'structures'." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

[probabilistic data structure:] "A data structure which exploits randomness to boost its efficiency, for example skip lists and Bloom filters. In the case of Bloom filters, the results of certain operations may be incorrect with a small probability." (Wei-Chih Huang & William J Knottenbelt, "Low-Overhead Development of Scalable Resource-Efficient Software Systems", 2014)

"A collection of methods for storing and organizing sets of data in order to facilitate access to them. More formally data structures are concise implementations of abstract data types, where an abstract data type is a set of objects together with a collection of operations on the elements of the set." (Ioannis Kouris et al, "Indexing and Compressing Text", 2015)

"A representation of the logical relationship between elements of data." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"Is a schematic organization of data and relationship to express a reality of interest, usually represented in a diagrammatic form." (Maria T Artese  Isabella Gagliardi, "UNESCO Intangible Cultural Heritage Management on the Web", 2015)

"The implementation of a composite data field in an abstract data type" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"A way of organizing data so that it can be efficiently accessed and updated." (Vasileios Zois et al, "Querying of Time Series for Big Data Analytics", 2016)

"A particular way of storing information, allowing to a high level approach on the software implementation." (Katia Tannous & Fillipe de Souza Silva, "Particle Shape Analysis Using Digital Image Processing", 2018)

"It is a particular way of organizing data in a computer so that they can be used efficiently." (Edgar C Franco et al, "Implementation of an Intelligent Model Based on Machine Learning in the Application of Macro-Ergonomic Methods...", 2019)

"Way information is represented and stored." (Shalin Hai-Jew, "Methods for Analyzing and Leveraging Online Learning Data", 2019)

"A physical or logical relationship among a collection of data elements." (IEEE 610.5-1990)

⛏️Data Management: Data Sharing (Definitions)

"The ability to share individual pieces of data transparently from a database across different applications." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Exchange of data and/or meta-data in a situation involving the use of open, freely available data formats, where process patterns are known and standard, and where not limited by privacy and confidentiality regulations." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Data sharing involves one entity sending data to another entity, usually with the understanding that the other entity will store and use the data. This process may involve free or purchased data, and it may be done willingly, or in compliance with regulations, laws, or court orders." (Jules H Berman, "Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information", 2013)

"The ability of subsystems or application programs to access data directly and to change it while maintaining data integrity." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"The ability of two or more DB2 subsystems to directly access and change a single set of data." (BMC)


⛏️Data Management: Information Overload (Definitions)

"A state in which information can no longer be internalized productively by the individual due to time constraints or the large volume of received information." (Martin J Eppler, "Managing Information Quality" 2nd Ed., 2006)

"Phenomena related to the inability to absorb and manage effectively large amounts of information, creating inefficiencies, stress, and frustration. It has been exacerbated by advances in the generation, storage, and electronic communication of information." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"A situation where relevant information becomes buried in a mass of irrelevant information" (Josep C Morales, "Information Disasters in Networked Organizations", 2008)

"A situation where individuals have access to so much information that it becomes impossible for them to function effectively, sometimes leading to where nothing gets done and the user gives the impression of being a rabbit caught in the glare of car headlights." Alan Pritchard, "Information-Rich Learning Concepts", 2009)

"is the situation when the information processing requirements exceed the information processing capacities." (Jeroen ter Heerdt & Tanya Bondarouk, "Information Overload in the New World of Work: Qualitative Study into the Reasons", 2009)

"Refers to an excess amount of information, making it difficult for individuals to effectively absorb and use information; increases the likelihood of poor decisions." (Leslie G Eldenburg & Susan K Wolcott, "Cost Management" 2nd Ed., 2011)

"The inability to cope with or process ever-growing amounts of data into our lives." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed., 2011)

"The state where the rate or amount of input to a system or person outstrips the capacity or speed of processing that input successfully." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The state in which a huge influx of information interferes with understanding an issue, making good decisions, and performance on the job." (Carol A. Brown, "Economic Impact of Information and Communication Technology in Higher Education", 2014)

"The difficulty a person can have understanding an issue and making decisions that can be caused by the presence of too much information." (Li Chen, "Mobile Technostress", Encyclopedia of Mobile Phone Behavior, 2015)

"Occurs when excess of information suffocates businesses and causes employees to suffer mental anguish and physical illness. Information overload causes high levels of stress that can result in health problems and the breakdown of individuals’ personal relationships." (Sérgio Maravilhas & Sérgio R G Oliveira, "Entrepreneurship and Innovation: The Search for the Business Idea", 2018)

"A set of subjective and objective difficulties, mainly originating in the amount and complexity of information available and people’s inability to handle such situations." (Tibor Koltay, "Information Overload", 2021)

19 March 2017

⛏️Data Management: Encryption (Definitions)

"A method for keeping sensitive information confidential by changing data into an unreadable form." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"The encoding of data so that the plain text is transformed into something unintelligible, called cipher text." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"Reordering of bits of data to make it unintelligible (and therefore useless) to an unauthorized third party, while still enabling the authorized user to use the data after the reverse process of decryption." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"To transform information from readable plain text to unreadable cipher text to prevent unintended recipients from reading the data." (Janice M Roehl-Anderson, "IT Best Practices for Financial Managers", 2010)

"The process of transforming data using an algorithm (called a cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key." (Craig S Mullins, "Database Administration", 2012)

"The process of converting readable data (plaintext) into a coded form (ciphertext) to prevent it from being read by an unauthorized party." (Microsoft, "SQL Server 2012 Glossary", 2012)

"The cryptographic transformation of data to produce ciphertext." (Manish Agrawal, "Information Security and IT Risk Management", 2014)

"The process of scrambling data in such a way that it is unreadable by unauthorized users but can be unscrambled by authorized users to be readable again." (Weiss, "Auditing IT Infrastructures for Compliance, 2nd Ed", 2015)

"The transformation of plaintext into unreadable ciphertext." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide, 8th Ed", 2018)

"In computer security, the process of transforming data into an unintelligible form in such a way that the original data either cannot be obtained or can be obtained only by using a decryption process." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Encryption is about translating the data into complex codes that cannot be interpreted (decrypted) without the use of a decryption key. These keys are typically distributed and stored separately. There are two types of encryption: symmetric key encryption and public key encryption. In symmetric key encryption, the key to both encrypt and decrypt is exactly the same. Public key encryption has two different keys. One key is used to encrypt the values (the public key), and one key is used to decrypt the data (the private key)." (Piethein Strengholt, "Data Management at Scale", 2020)

"The process of encoding data in such a way to prevent unauthorized access." (AICPA)

15 March 2017

⛏️Data Management: Data Conversion (Definitions)

"The function to translate data from one format to another" (Yang Xiang & Daxin Tian, "Multi-Core Supported Deep Packet Inspection", 2010)

"1.In systems, the migration from the use of one application to another. 2.In data management, the process of preparing, reengineering, cleansing and transforming data and loading it into a new target data structure. Typically, the term is used to describe a one-time event as part of a new database implementation. However, it is sometimes used to describe an ongoing operational procedure." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"(1)The process of changing data structure, format, or contents to comply with some rule or measurement requirement. (2)The process of changing data contents stored in one system so that it can be stored in another system, or used by an application." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The process of automatically reading data in one file format and emitting the same data in a different format, thus making the data accessible to a wider range of applications." (Open Data Handbook)

"To change data from one form of representation to another; for example, to convert data from an ASCII representation to an EBCDIC representation." (IEEE 610.5-1990)

⛏️Data Management: Data Compression (Definitions)

"any kind of data reduction method that preserves the application-specific information." (Teuvo Kohonen, "Self-Organizing Maps 3rd Ed.", 2001)

"The process of reducing the size of data by use of mathematical algorithms." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"1.Algorithms or techniques that change data to a smaller physical size that contains the same information. 2.The process of changing data to be stored in a smaller physical or logical space." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Encoding information in such a way that its representation consumes less space in memory" (Hasso Plattner, "A Course in In-Memory Data Management: The Inner Mechanics of In-Memory Databases 2nd Ed.", 2014)

"Compression is a data management technique that uses repeating patterns in data to reduce the storage needed to hold the data. A compression algorithm for databases should perform compression and decompression operations as fast as possible. This often entails a trade-off between the speed of compression/decompression and the size of the compressed data. Faster compression algorithms can lead to larger compressed data than other, slower algorithms." (Dan Sullivan, "NoSQL for Mere Mortals®", 2015)

"Reducing the amount of space needed to store a piece of data" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"The process of reducing the size of a data file by encoding information using fewer bits than the original file." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"A method that reduces the amount of space needed for storing data. See also client compression and hardware compression." (CommVault, "Documentation 11.20", 2018)

"Any technique used to reduce the amount of storage required to store data." (IEEE 610.5-1990)

14 March 2017

⛏️Data Management: Data Protection (Definitions)

"The protecting of data from damage, destruction, and unauthorized alteration." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"Deals with issues such as data security, privacy, and availability. Data protection controls are required by regulations and industry mandates such as Sarbanes-Oxley, European Data Protection Law, and others." (Allen Dreibelbis et al, "Enterprise Master Data Management", 2008)

"A set of rules that aim to protect the rights, freedoms and interests of individuals when information related to them is being processed." (Maria Tzanou, "Data Protection in EU Law after Lisbon: Challenges, Developments, and Limitations", 2015)

"An umbrella term for various procedures that ensure information is secure and available only to authorized users." (Peter Sasvari & Zoltán Nagymate, "The Empirical Analysis of Cloud Computing Services among the Hungarian Enterprises", 2015)

"Protection of the data against unauthorized access by third parties as well as protection of personal data (such as customer data) in the processing of data according to the applicable legal provisions." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"Legal control over access to, and use of, data in computers." (Lucy Self & Petros Chamakiotis, "Understanding Cloud Computing in a Higher Education Context", 2018)

"Data protection is a task of safeguarding personal or sensitive data which are complex and widely distributed." (M Fevzi Esen & Eda Kocabas, "Personal Data Privacy and Protection in the Meeting, Incentive, Convention, and Exhibition (MICE) Industry", 2019)

"Process of protecting important information from corruption, compromise, or loss." (Patrícia C T Gonçalves, "Medical Social Networks, Epidemiology and Health Systems", 2021)

"The process involving use of laws to protect data of individuals from unauthorized disclosure or access." (Frank Makoza, "Learning From Abroad on SIM Card Registration Policy: The Case of Malawi", 2019)

"Is the process in information and communication technology that deals with the ability an organization or individual to safeguard data and information from corruption, theft, compromise, or loss." (Valerianus Hashiyana et al, "Integrated Big Data E-Healthcare Solutions to a Fragmented Health Information System in Namibia", 2021)

"The mechanisms with which an organization enables individuals to retain control of the personal data they willingly share, where security provides policies, controls, protocols, and technologies necessary to fulfill rules and obligations in accordance with privacy regulations, industry standards, and the organization's ethics and social responsibility." (Forrester)

06 March 2017

⛏️Data Management: Audit Trail (Definitions)

"Audit records stored in the sybsecurity database." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A record of what happened to data from its inception to its current state. Audit trails help verify the integrity of data." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"Data maintained to trace activity, such as a transaction log, for purposes of recovery or audit." (Craig S Mullins, "Database Administration", 2012)

"A chronological record of activities on information resources that enables the reconstruction and examination of sequences of activities on those information resources for later review." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition" 2nd Ed., 2013)

"A trace of a sequence of events in a clerical or computer system. This audit usually identifies the creation or modification of any element in the system, who did it, and (possibly) why it was done." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A chronological record of events or transactions. An audit trail is used for examining or reconstructing a sequence of events or transactions, managing security, and recovering lost transactions." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

 "A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates result checking and allows a process audit to be carried out [after TMap]." (Software Quality Assurance)

05 March 2017

⛏️Data Management: System of Record (Definitions)

"The system that definitively specifies data values. In dealing with redundant data, you can have values that should be the same but disagree. The system of record is the system you go back to, in order to verify the true value of the data." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"The definitive and singular source of operational data. If data element ABC has a value of 25 in a database record but a value of 45 in the system of record, by definition, the first value is incorrect and must be reconciled. The system of record is useful for managing redundancy of data." (William H Inmon, "Building the Data Warehouse", 2005)

"The single authoritative, enterprise-designated source of operational data. It is the most current, accurate source of its data." (David Lyle & John G Schmidt, "Lean Integration", 2010)

"A system that stores the 'official' version of a data attribute." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The system of record is a system that is charged with keeping the most complete or trustworthy representation of a set of entities. Within the practice of master data management, such representations are referred to as golden records and the system of record can also be called the system of truth." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

"Records from which information is retrieved by the name, identifying number, symbol, or other identifying particular assigned to the individual. Sometimes abbreviated as SOR." ( Manish Agrawal, "Information Security and IT Risk Management", 2014)

"An information storage system (commonly implemented on a computer system) that is the authoritative data source for a given data element or piece of information. The need to identify systems of record can become acute in organizations where management information systems have been built by taking output data from multiple-source systems, reprocessing this data, and then re-presenting the result for a new business use." (Janice M Roehl-Anderson, "IT Best Practices for Financial Managers", 2010)

28 February 2017

⛏️Data Management: Data Lifecycle (Definitions)

[Data Lifecycle Management (DLM):] "The process by which data is moved to different mass storage devices based on its age." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

[master data lifecycle management:] "Supports the definition, creation, access, and management of master data. Master data must be managed and leveraged effectively throughout its entire lifecycle." (Allen Dreibelbis et al, "Enterprise Master Data Management", 2008)

[Data lifecycle management (DLM):] "Managing data as blocks without underlying knowledge of the content of the blocks, based on limited metadata (e.g., creation date, last accessed)." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"The data life cycle is the set of processes a dataset goes through from its origin through its use(s) to its retirement. Data that moves through multiple systems and multiple uses has a complex life cycle. Danette McGilvray’s POSMAD formulation identifies the phases of the life cycle as: planning for, obtaining, storing and sharing, maintaining, applying, and disposing of data." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

"The recognition that as data ages, that data takes on different characteristics" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The development of a record in the company’s IT systems from its creation until its deletion. This process may also be designated as 'CRUD', an acronym for the Create, Read/Retrieve, Update and Delete database operations." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"The series of stages that data moves though from initiation, to creation, to destruction. Example: the data life cycle of customer data has four distinct phases and lasts approximately eight years." (Gregory Lampshire, "The Data and Analytics Playbook", 2016)

"covers the period of time from data origination to the time when data are no longer considered useful or otherwise disposed of. The data lifecycle includes three phases, the origination phase during which data are first collected, the active phase during which data are accumulating and changing, and the inactive phase during which data are no longer expected to accumulate or change, but during which data are maintained for possible use." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

"The complete set of development stages from creation to disposal, each with its own characteristics and management responsibilities, through which organizational data assets pass." (Kevin J Sweeney, "Re-Imagining Data Governance", 2018)

"An illustrative phrase describing the many manifestations of data from its raw, unanalyzed state, such as survey data, to intellectual property, such as blueprints." (Sue Milton, "Data Privacy vs. Data Security", 2021)

"Refers to all the stages in the existence of digital information from creation to destruction. A lifecycle view is used to enable active management of the data objects and resource over time, thus maintaining accessibility and usability." (CODATA)

🧊Data Warehousing: Data Load Optimization (Part I: A Success Story)

Introduction

This topic has been waiting in the queue for almost two years, since I finished optimizing an already existing relational data warehouse in a SQL Server 2012 Enterprise Edition environment. Through various simple techniques I managed to reduce the running time of the load process by more than 65%, from 9 to 3 hours. That’s a considerable performance gain, considering that I didn’t have to refactor any of the business logic implemented in the queries.

The ETL (Extract, Transform, Load) solution made use of SSIS (SQL Server Integration Services) packages to load data sequentially from several sources into staging tables, and from staging further into the base tables. Each package was responsible for deleting the data from the staging tables via TRUNCATE, extracting the data 1:1 from the source into the staging tables, and then loading the data 1:1 from the staging tables into the base tables. It’s the simplest and a relatively effective ETL design, one I have also used with small alterations for other data warehouse solutions. For months the data load worked smoothly, until data growth and eventually other problems increased the loading time from 5 to 9 hours.

Using TABLOCK Hint

Using SSIS to bulk load data into SQL Server provides a good combination of performance and flexibility. Within a Data Flow, when the “Table Lock” property on the destination is checked, the inserted records can be minimally logged, speeding up the load by a factor of two. The TABLOCK hint can also be used for other insert operations performed outside of SSIS packages. In this case the movement of data from staging into the base tables was performed in plain T-SQL, outside of SSIS packages, and further data processing benefited from this change as well. This optimization step alone provided a 30-40% performance gain.
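
For illustration, a minimal sketch of the staging-to-base load with the TABLOCK hint follows; the table and column names are made up, and minimal logging additionally depends on the recovery model and the prerequisites of the target table:

-- staging-to-base load; with TABLOCK the insert can be minimally logged
INSERT INTO dbo.FactSales WITH (TABLOCK)
       (SalesOrderID, OrderDate, CustomerID, Amount)
SELECT  SalesOrderID, OrderDate, CustomerID, Amount
FROM    stg.FactSales;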

Drop/Recreating the Indexes on Big Tables

As the base tables each had several indexes, it proved beneficial to drop the indexes on the big tables (e.g. those with more than 1,000,000 records) before loading the data into the base tables, and to recreate the indexes afterwards. This was done within SSIS and provided an additional 20-30% performance gain on top of the previous step.
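
A simplified sketch of the drop-load-recreate pattern (the object names and the index definition are made-up examples):

-- drop the nonclustered index before the load, if it exists
IF EXISTS (SELECT 1 FROM sys.indexes
           WHERE name = 'IX_FactSales_CustomerID'
             AND object_id = OBJECT_ID('dbo.FactSales'))
    DROP INDEX IX_FactSales_CustomerID ON dbo.FactSales;

-- load the data (see the TABLOCK example above)
INSERT INTO dbo.FactSales WITH (TABLOCK)
       (SalesOrderID, OrderDate, CustomerID, Amount)
SELECT  SalesOrderID, OrderDate, CustomerID, Amount
FROM    stg.FactSales;

-- recreate the index after the load
CREATE NONCLUSTERED INDEX IX_FactSales_CustomerID
    ON dbo.FactSales (CustomerID)
    INCLUDE (Amount);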

Consolidating the Indexes

Adding missing indexes and removing or consolidating overlapping indexes are typical index maintenance tasks, which are apparently ignored from time to time. This doesn’t always bring as much performance as the previous methods, though dropping and consolidating some indexes proved to be beneficial, as less data had to be maintained. The data processing logic benefited from the creation of new indexes as well.
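
As a hedged starting point for such a review, the missing-index DMVs can suggest candidate indexes; the suggestions need to be validated against the workload rather than applied blindly:

-- candidate indexes suggested by SQL Server since the last restart
SELECT  d.[statement] AS TableName,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM    sys.dm_db_missing_index_details d
JOIN    sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;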

Running Packages in Parallel

As the packages were run sequentially (one package at a time), the data load was hardly taking advantage of the processing power available on the server. Even if individual queries could use parallelism, the benefit was minimal. Enabling the packages to run in parallel added a further performance gain, however it reduced the availability of processing resources for other tasks. When the data load is performed overnight this causes minimal overhead, however it should be avoided when the data are loaded during business hours.

Using Clustered Indexes

In my analysis I found that many tables, especially the ones storing prepared data, were lacking a clustered index, even if further indexes were built on them. I remember that years back there was a (false) myth that fact and/or dimension tables don’t need clustered indexes in SQL Server. Of course clustered indexes have downsides (e.g. fragmentation, excessive key lookups), though their benefits exceed the downsides by far. Besides missing clustered indexes, there were cases in which the tables would have benefited from a narrow clustered index instead of a wide multicolumn one. Such cases were addressed as well, where it made sense.
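
As a sketch, replacing a hypothetical wide clustered index with a narrow one could look as follows (the names are made up; dropping and recreating a clustered index also rebuilds the nonclustered indexes, so it is best done outside the load window):

-- drop the (hypothetical) wide multicolumn clustered index
DROP INDEX IX_SalesPrepared_Wide ON dbo.SalesPrepared;

-- recreate it on a single narrow, ever-increasing key
CREATE CLUSTERED INDEX CIX_SalesPrepared_SalesKey
    ON dbo.SalesPrepared (SalesKey);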

Removing the Staging Tables

Given that the source and target systems are in the same virtual environment, and the data are loaded 1:1 between the various layers without further transformations or conversions, one could load the data directly into the base tables. After some tests I came to the conclusion that the load from the source tables into the staging tables and the load from the staging tables into the base tables (with the TABLOCK hint) took almost the same amount of time. This means that the base tables would be unavailable for roughly the same amount of time if the data were loaded from the sources directly into them. Therefore one could in theory remove the staging tables from the architecture. Frankly, one should think twice before making such a change, as there can be further implications over time. Even if today the data are imported 1:1, in the future this could change.

Reducing the Data Volume

Reducing the data volume was identified as a further possible technique to reduce the amount of time needed for the data load. A data warehouse is built based on a set of requirements and presumptions that change over time. It can happen, for example, that even if the reports need only 1-2 years’ worth of data, the data load considers a much bigger timeframe. Some systems can have up to 5-10 years’ worth of data. Loading all data without a specific requirement leads to a waste of resources and bigger load times. Limiting the transactional data to a given timeframe can make a considerable difference. Additionally, there are historical data that have the potential to be archived.

There are also tables for which a weekly or monthly refresh would suffice. Some tables or even data sources can become obsolete, yet they continue to be loaded into the data warehouse. Such cases occur seldom, though they do occur. Some unused or redundant columns could also have been removed from the packages.
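
As a sketch, limiting the extraction to a two-year window could look as follows (the source, staging and column names are made up):

-- extract only the timeframe actually required by reporting
DECLARE @CutoffDate date = DATEADD(YEAR, -2, CAST(GETDATE() AS date));

INSERT INTO stg.SalesOrders WITH (TABLOCK)
       (SalesOrderID, OrderDate, CustomerID, Amount)
SELECT  SalesOrderID, OrderDate, CustomerID, Amount
FROM    src.SalesOrders
WHERE   OrderDate >= @CutoffDate;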

Further Thoughts

There are further techniques to optimize the data load within a data warehouse, like partitioning large tables, using columnstore indexes or optimizing the storage; however, my target was to provide a sufficient performance gain with a minimum of effort and design changes. Therefore I stopped when I considered that the amount of effort required was considerably higher than the performance gain to be expected.

Further Reading:
[1] TechNet (2009) The Data Loading Performance Guide, by Thomas Kejser, Peter Carlin & Stuart Ozer (link)
[2] MSDN (2010) Best Practices for Data Warehousing with SQL Server 2008 R2, by Mark Whitehorn, Keith Burns & Eric N Hanson (link)
[3] MSDN (2012) Whitepaper: Fast Track Data Warehouse Reference Guide for SQL Server 2012, by Eric Kraemer, Mike Bassett, Eric Lemoine & Dave Withers (link)
[4] MSDN (2008) Best Practices for Data Warehousing with SQL Server 2008, by Mark Whitehorn & Keith Burns (link)
[5] TechNet (2005) Strategies for Partitioning Relational Data Warehouses in Microsoft SQL Server, by Gandhi Swaminathan (link)
[6] SQL Server Customer Advisory Team (2013) Top 10 Best Practices for Building a Large Scale Relational Data Warehouse (link)

23 February 2017

⛏️Data Management: Data Integration (Definitions)

"The process of coherently using data from across platforms. applications or business units. Data integration ensures that data from different sources is merged allowing silos of data to be combined." (Tony Fisher, "The Data Asset", 2009)

"The planned and controlled:
a) merge using some form of reference,
b) transformation using a set of business rules, and
c) flow of data from a source to a target, for operational and/or analytical use. Data needs to be accessed and extracted, moved, validated and cleansed, standardized, transformed, and loaded." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The collection of data from various sources with the same significance into one uniform record. This data may be physically integrated, for example, into a data warehouse or virtually, meaning that the data will remain in the source systems, however will be accessed using a uniform view." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"Data integration comprises the activities, techniques, and tools required to consolidate and harmonize data from different (multiple) sources into a unified view. The processes of extract, transform, and load (ETL) are part of this discipline." (Piethein Strengholt, "Data Management at Scale", 2020)

"Pulling together and reconciling dispersed data for analytic purposes that organizations have maintained in multiple, heterogeneous systems. Data needs to be accessed and extracted, moved and loaded, validated and cleaned, and standardized and transformed." (Information Management)

"The combination of technical and business processes used to combine data from disparate sources into meaningful insights." (Solutions Review)

"The process of retrieving and combining data from different sources into a unified set for users, organizations, and applications." (MuleSoft) 

"Data integration is the practice of consolidating data from disparate sources into a single dataset with the ultimate goal of providing users with consistent access and delivery of data across the spectrum of subjects and structure types, and to meet the information needs of all applications and business processes." (OmiSci) [source]

"Data integration is the process of combining data from multiple source systems to create unified sets of information for both operational and analytical uses." (Techtarget)

"Data integration is the process of bringing data from disparate sources together to provide users with a unified view. The premise of data integration is to make data more freely available and easier to consume and process by systems and users." (Tibco) [source]

"Data integration is the process of retrieving and combining data from different sources into a unified set of data. Data integration can be used to combine data for users, organizations, and applications." (kloudless)

"Data integration is the process of taking data from multiple disparate sources and collating it in a single location, such as a data warehouse. Once integrated, data can then be used for detailed analytics or to power other enterprise applications." (Xplenty) [source]

"Data integration is the process used to combine data from disparate sources into a unified view that can provide valuable and actionable information." (snowflake) [source]

"Data integration refers to the technical and business processes used to combine data from multiple sources to provide a unified, single view of the data." (OmiSci) [source]

"The discipline of data integration comprises the practices, architectural techniques and tools for achieving the consistent access and delivery of data across the spectrum of data subject areas and data structure types in the enterprise to meet the data consumption requirements of all applications and business processes." (Gartner)

⛏️Data Management: Data Cleaning/Cleansing (Definitions)

"A processing step where missing or inaccurate data is replaced with valid values." (Joseph P Bigus, "Data Mining with Neural Networks: Solving Business Problems from Application Development to Decision Support", 1996)

"The process of validating data prior to a data analysis or Data Mining. This includes both ensuring that the values of the data are valid for a particular attribute or variable (e.g., heights are all positive and in a reasonable range) and that the values for given records or set of records are consistent." (William J Raynor Jr., "The International Dictionary of Artificial Intelligence", 1999)

"The process of correcting errors or omissions in data. This is often part of the extraction, transformation, and loading (ETL) process of extracting data from a source system, usually before attempting to load it into a target system. This is also known as data scrubbing." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"The removal of inconsistencies, errors, and gaps in source data prior to its incorporation into data warehouses or data marts to facilitate data integration and improve data quality." (Steve Williams & Nancy Williams, "The Profit Impact of Business Intelligence", 2007)

"Software used to identify potential data quality problems. For example, if a customer is listed multiple times in a customer database using variations of the spelling of his or her name, the data cleansing software ensures that each data element is consistent so there is no confusion. Such software is used to make corrections to help standardize the data." (Judith Hurwitz et al, "Service Oriented Architecture For Dummies" 2nd Ed., 2009)

"The process of reviewing and improving data to make sure it is correct, up to date, and not duplicated." (Tony Fisher, "The Data Asset", 2009)

"The process of correcting data errors to bring the level of data quality to an acceptable level for the information user needs." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The act of detecting and removing and/or correcting data in a database. Also called data scrubbing." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures", 2012)

"Synonymous with data fixing or data correcting, data cleaning is the process by which errors, inexplicable anomalies, and missing values are somehow handled. There are three options for data cleaning: correcting the error, deleting the error, or leaving it unchanged." (Jules H Berman, "Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information", 2013)

"The process of detecting, removing, or correcting incorrect data." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"The process of finding and fixing errors and inaccuracies in data" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The process of removing corrupt, redundant, and inaccurate data in the data governance process." (Robert F Smallwood, "Information Governance: Concepts, Strategies, and Best Practices", 2014) 

"The process of eliminating inaccuracies, irregularities, and discrepancies from data." (Jim Davis & Aiman Zeid, "Business Transformation", 2014)

"The process of reviewing and revising data in order to delete duplicates, correct errors, and provide consistency." (Jason Williamson, "Getting a Big Data Job For Dummies", 2015)

"the processes of identifying and resolving potential data errors." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

"A sub-process in data preprocessing, where we remove punctuation, stop words, etc. from the text." (Neha Garg & Kamlesh Sharma, "Machine Learning in Text Analysis", 2020)

"Processing a dataset to make it easier to consume. This may involve fixing inconsistencies and errors, removing non-machine-readable elements such as formatting, using standard labels for row and column headings, ensuring that numbers, dates, and other quantities are represented appropriately, conversion to a suitable file format, reconciliation of labels with another dataset being used (see data integration)." (Open Data Handbook) 

"The process of detecting and correcting faulty records, leading to highly accurate BI-informed decisions, as enormous databases and rapid acquisition of data can lead to inaccurate or faulty data that impacts the resulting BI and analysis. Correcting typographical errors, de-duplicating records, and standardizing syntax are all examples of data cleansing." (Insight Software)

"Transforming data in its native state to a pre-defined standardized format using vendor software." (Solutions Review)

"Data cleansing is the effort to improve the overall quality of data by removing or correcting inaccurate, incomplete, or irrelevant data from a data system.  […] Data cleansing techniques are usually performed on data that is at rest rather than data that is being moved. It attempts to find and remove or correct data that detracts from the quality, and thus the usability, of data. The goal of data cleansing is to achieve consistent, complete, accurate, and uniform data." (Informatica) [source]

"Data cleansing is the process of modifying data to improve accuracy and quality." (Xplenty) [source]

"Data cleaning is the process of preparing data for analysis by removing or modifying data that is incorrect, incomplete, irrelevant, duplicated, or improperly formatted." (Sisense) [source]

"Data Cleansing (or Data Scrubbing) is the action of identifying and then removing or amending any data within a database that is: incorrect, incomplete, duplicated." (experian) [source]

"Data cleansing, or data scrubbing, is the process of detecting and correcting or removing inaccurate data or records from a database. It may also involve correcting or removing improperly formatted or duplicate data or records. Such data removed in this process is often referred to as 'dirty data'. Data cleansing is an essential task for preserving data quality." (Teradata) [source]

"Data scrubbing, also called data cleansing, is the process of amending or removing data in a database that is incorrect, incomplete, improperly formatted, or duplicated." (Techtarget) [source]

"the process of reviewing and revising data in order to delete duplicates, correct errors and provide consistency." (Analytics Insight)

21 February 2017

⛏️Data Management: Validity (Definitions)

"A characteristic of the data collected that indicates they are sound and accurate." (Teri Lund & Susan Barksdale, "10 Steps to Successful Strategic Planning", 2006)

"Implies that the test measures what it is supposed to." (Robert McCrie, "Security Operations Management" 2nd Ed., 2006)

"The determination that values in the field are or are not within a set of allowed or valid values. Measured as part of the Data Integrity Fundamentals data quality dimension." (Danette McGilvray, "Executing Data Quality Projects", 2008)

"A data quality dimension that reflects the confirmation of data items to their corresponding value domains, and the extent to which non-confirmation of certain items affects fitness to use. For example, a data item is invalid if it is defined to be integer but contains a non-integer value, linked to a finite set of possible values but contains a value not included in this set, or contains a NULL value where a NULL is not allowed." (G Shankaranarayanan & Adir Even, "Measuring Data Quality in Context", 2009)

"An aspect of data quality consisting in its steadiness despite the natural process of data obsolescence increasing in time." (Juliusz L Kulikowski, "Data Quality Assessment", 2009)

"An inherent quality characteristic that is a measure of the degree of conformance of data to its domain values and business rules." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"Validity is a dimension of data quality, defined as the degree to which data conforms to stated rules. As used in the DQAF, validity is differentiated from both accuracy and correctness. Validity is the degree to which data conform to a set of business rules, sometimes expressed as a standard or represented within a defined data domain." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

"Validity is defined as the extent to which data corresponds to reference tables, lists of values from golden sources documented in metadata, value ranges, etc." (Rajesh Jugulum, "Competing with High Quality Data", 2014)

"the state of consistency between a measurement and the concept that a researcher intended to measure." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

[semantic validity:] "The compliance of attribute data to rules regarding consistency and truthfulness of association." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

[syntactic validity:] "The compliance of attribute data to format and grammar rules." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"Validity is a data quality dimension that refers to information that doesn’t conform to a specific format or doesn’t follow business rules." (Precisely) [source]

⛏️Data Management: Master Data Management [MDM] (Definitions)

"The set of disciplines and methods to ensure the currency, meaning, and quality of a company’s reference data that is shared across various systems and organizations." (Jill Dyché & Evan Levy, "Customer Data Integration", 2006)

"The framework of processes and technologies aimed at creating and maintaining an authoritative, reliable, sustainable, accurate, and secure data environment that represents a “single version of truth,” an accepted system of record used both intra- and interenterprise across a diverse set of application systems, lines of business, and user communities. (Alex Berson & Lawrence Dubov, "Master Data Management and Customer Data Integration for a Global Enterprise", 2007)

"Centralized facilities designed to hold master copies of shared entities, such as customers or products. MDM systems are meant to support transaction systems and usually have some means to reconcile different sources for the same attribute." (Ralph Kimball, "The Data Warehouse Lifecycle Toolkit", 2008)

"Processes that control management of master data values to enable consistent, shared, contextual use across systems, of the most accurate, timely, and relevant version of truth about essential business entities." (DAMA International, "The DAMA Guide to the Data Management Body of Knowledge" 1st Ed., 2009)

"The guiding principles and technology for maintaining data in a manner that can be shared across various systems and departments throughout an organization." (Tony Fisher, "The Data Asset", 2009)

"The processes and tools to help an organization consistently define and manage core reference or descriptive data across the organization. This may involve providing a centralized view of the data to ensure that its use for all business processes is consistent and accurate." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"Master Data Management comprises a set of processes and tools that consistently define and manage the nontransactional data entities of an organization such as Customers and Products." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A discipline that resolves master data to maintain the golden record, the holistic and panoramic view of master entities and relationships, and the benchmark for master data that can be used across the enterprise, and sometimes between enterprises to facilitate data exchanges." (Alex Berson & Lawrence Dubov, "Master Data Management and Data Governance", 2010)

"A system and services for the single, authoritative source of truth of master data for an enterprise." (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"MDM is a set of disciplines, processes, and technologies for ensuring the accuracy, completeness, timeliness, and consistency of multiple domains of enterprise data across applications, systems, and databases, and across multiple business processes, functional areas, organizations, geographies, and channels." (Dan Power, "Moving Master Data Management into the Cloud", 2010)

"The processes and tools to help an organization consistently define and manage core reference or descriptive data " (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"The set of codes and structures that identify and organize data, such as customer numbers, employee IDs, and general ledger account numbers." (Janice M Roehl-Anderson, "IT Best Practices for Financial Managers", 2010)

"An information quality activity in which the data elements that are used by multiple systems in an organization are identified, managed, and controlled at the enterprise level." (John R Talburt, "Entity Resolution and Information Quality", 2011)

"Data that is key to the operation of a business, such as data about customers, suppliers, partners, products, and materials." (Brenda L Dietrich et al, "Analytics Across the Enterprise", 2014)

"In business, master data management comprises the processes, governance, policies, standards, and tools that consistently define and manage the critical data of an organization to provide a single point of reference." (Keith Holdaway, "Harness Oil and Gas Big Data with Analytics", 2014)

"The data that describes the important details of a business subject area such as customer, product, or material across the organization. Master data allows different applications and lines of business to use the same definitions and data regarding the subject area. Master data gives an accurate, 360-degree view of the business subject." (Jim Davis & Aiman Zeid, "Business Transformation: A Roadmap for Maximizing Organizational Insights", 2014)

"A new trend in IT, Master Data Management is composed of processes and tools that ultimately help an organization define and manage its master data." (Andrew Pham et al, "From Business Strategy to Information Technology Roadmap", 2016)

"It is a technology-enabled discipline in which business and IT work together to ensure the uniformity, accuracy, stewardship, semantic consistency and accountability of the enterprise’s official shared master data assets. Master data is the consistent and uniform set of identifiers and extended attributes that describes the core entities of the enterprise including customers, prospects, citizens, suppliers, sites, hierarchies and chart of accounts." (Richard T Herschel, "Business Intelligence", 2019)

"The most critical data is called master data and the companioned discipline of master data management, which is about making the master data within the organization accessible, secure, transparent, and trustworthy." (Piethein Strengholt, "Data Management at Scale", 2020)

"An umbrella term that incorporates processes, policies, standards, tools and governance that define and manage all of an organization’s critical data in order to formulate one point of reference." (Solutions Review)

"The technology, tools, and processes required to create and maintain consistent and accurate lists of master data of an organization." (Microsoft)

"Master data management solutions provide the capabilities to create the unique and qualified reference of shared enterprise data, such as customer, product, supplier, employee, site, asset, and organizational data." (Forrester)

"Master data management (MDM) is the effort made by an organization to create one single master reference source for all critical business data, leading to fewer errors and less redundancy in business processes." (Informatica) [source]

"Master data management (MDM) is the process of defining, managing, and making use of an organization’s master data so it is visible and accessible via a single reference point." (MuleSoft) [source]

"Master data management (MDM) is the process of making sure an organization is always working with, and making decisions based on, one version of current, ‘true’ data - often referred to as a 'golden record'." (Talend) [source]
