28 February 2018

🔬Data Science: Inference (Definitions)

"Drawing some form of conclusion about a measurable functional response based on representative or sample experimental data. Sample size, uncertainty, and the laws of probability play a major role in making inferences." (Clyde M Creveling, "Six Sigma for Technical Processes: An Overview for R Executives, Technical Leaders, and Engineering Managers", 2006)

"Reasoning from known propositions." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"In general, inference is the act or process of deriving new facts from facts known or assumed to be true. In Artificial Intelligence, researchers develop automated inference engines to automate human inference." (Michael Fellmann et al, "Supporting Semantic Verification of Process Models", 2012)

[statistical inference:] "A method that uses sample data to draw conclusions about a population." (Geoff Cumming, "Understanding The New Statistics", 2013)

"Any conclusion drawn on the basis of some set of information. In research, we draw inferences on the basis of empirical data we collect and ideas we construct." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

[causal inference:] "Conclusion that changes in the independent variable resulted in a change in the dependent variable. It may be drawn only if all potential confounding variables are properly controlled." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"The process of using a probabilistic model to answer a query, given evidence." (Avi Pfeffer, "Practical Probabilistic Programming", 2016)

[inductive inference:] "A machine learning method for learning the rules that produced the actual data." (David Natingga, "Data Science Algorithms in a Week" 2nd Ed., 2018)

"The ability to derive information not explicitly available." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

27 February 2018

🔬Data Science: Data Modeling (Definitions)

"The task of developing a data model that represents the persistent data of some enterprise." (Keith Gordon, "Principles of Data Management", 2007)

"An analysis and design method, building data models to 
a) define and analyze data requirements,
b) design logical and physical data structures that support these requirements, and
c) define business and technical meta-data." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The process of creating a data model by applying formal data model descriptions using data modeling techniques." (Christian Galinski & Helmut Beckmann, "Concepts for Enhancing Content Quality and eAccessibility: In General and in the Field of eProcurement", 2012)

"The process of creating the abstract representation of a subject so that it can be studied more cheaply (a scale model of an airplane in a wind tunnel), at a particular moment in time (weather forecasting), or manipulated, modified, and altered without disrupting the original (economic model)." (George Tillmann, "Usage-Driven Database Design: From Logical Data Modeling through Physical Schmea Definition", 2017)

"A method used to define and analyze the data requirements needed to support an entity’s business processes, defining the relationship between data elements and structures." (Solutions Review)

"A method used to define and analyze data requirements needed to support the business functions of an enterprise. These data requirements are recorded as a conceptual data model with associated data definitions. Data modeling defines the relationships between data elements and data structures. (Microstrategy)

"A method used to define and analyze data requirements needed to support the business functions of an enterprise. These data requirements are recorded as a conceptual data model with associated data definitions. Data modeling defines the relationships between data elements and structures." (Information Management)

"Refers to the process of defining, analyzing, and structuring data within data models." (Insight Software)

"Data modeling is a way of mapping out and visualizing all the different places that a software or application stores information, and how these sources of data will fit together and flow into one another." (Sisense) [source]

"Data modeling is the process of documenting a complex software system design as an easily understood diagram, using text and symbols to represent the way data needs to flow. The diagram can be used to ensure efficient use of data, as a blueprint for the construction of new software or for re-engineering a legacy application." (Techtarget) [source]

24 February 2018

💎SQL Reloaded: Misusing Views and Pseudo-Constants

Views, as virtual tables, can be misused to replace tables in certain circumstances, by storing values within one or multiple rows, as in the examples below:

-- parameters for a BI solution
CREATE VIEW dbo.vLoV_Parameters
AS
SELECT Cast('ABC' as nvarchar(20)) AS DataAreaId
 , Cast(GetDate() as Date) AS CurrentDate 
 , Cast(100 as int) AS BatchCount 

GO

SELECT *
FROM dbo.vLoV_Parameters

GO

-- values for a dropdown 
CREATE VIEW dbo.vLoV_DataAreas
AS
SELECT Cast('ABC' as nvarchar(20)) AS DataAreaId
 , Cast('Company ABC' as nvarchar(50)) AS Description 
UNION ALL
SELECT 'XYZ' DataAreaId 
 , 'Company XYZ'

GO

SELECT *
FROM dbo.vLoV_DataAreas

GO

These solutions aren’t elegant, and are typically not recommended because they go against one of the principles of good database design, namely “data belong in tables”, though they do the trick when needed. Personally, I used them only in a handful of cases, e.g. when it wasn’t allowed to create tables, when something needed to be tested for a short period of time, or when the overhead of creating a table for 2-3 values wasn’t justified. Because of their scarce use, I hadn’t given them much thought until I discovered Jared Ko’s blog posting on pseudo-constants. He considers the values from the first view to be pseudo-constants, and advocates their use especially for easier dependency tracking, easier code refactoring, avoidance of implicit data conversions, and easier maintenance of values.
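
Dependency tracking is indeed a tangible benefit: views are visible to SQL Server’s metadata functions, whereas literals are not. As a minimal sketch (assuming SQL Server 2008 or later), the objects referencing the parameters view can be listed as follows:

-- objects referencing the view (SQL Server 2008+)
SELECT SRE.referencing_schema_name
 , SRE.referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.vLoV_Parameters', 'OBJECT') SRE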

All these are good reasons to consider them, therefore I tried to take the idea further and see if it survives a reality check. For this I took Dynamics AX as testing environment, as it makes extensive use of enumerations (aka base enums) to store the lists of values needed all over the application. Behind each table there are one or more enumerations, the tables storing master data abounding with them. For exemplification let’s consider InventTrans, the table that stores the inventory transactions, in which the receipt and issue transactions are governed by three enumerations: StatusIssue, StatusReceipt and Direction.

-- Status Issue Enumeration 
CREATE VIEW dbo.vLoV_StatusIssue
AS
SELECT cast(0 as int) AS None
 , cast(1 as int) AS Sold
 , cast(2 as int) AS Deducted
 , cast(3 as int) AS Picked
 , cast(4 as int) AS ReservPhysical
 , cast(5 as int) AS ReservOrdered
 , cast(6 as int) AS OnOrder
 , cast(7 as int) AS QuotationIssue

GO

-- Status Receipt Enumeration 
CREATE VIEW dbo.vLoV_StatusReceipt
AS
SELECT cast(0 as int) AS None
 , cast(1 as int) AS Purchased
 , cast(2 as int) AS Received
 , cast(3 as int) AS Registered
 , cast(4 as int) AS Arrived
 , cast(5 as int) AS Ordered
 , cast(6 as int) AS QuotationReceipt

GO

-- Inventory Direction Enumeration 
CREATE VIEW dbo.vLoV_InventDirection
AS
SELECT cast(0 as int) AS None
 , cast(1 as int) AS Receipt
 , cast(2 as int) AS Issue

GO

To see these views at work, let’s construct the InventTrans table on the fly:

-- creating an ad-hoc table  
SELECT *
INTO dbo.InventTrans
FROM (VALUES (1, 1, 0, 2, -1, 'A0001')
 , (2, 1, 0, 2, -10, 'A0002')
 , (3, 2, 0, 2, -6, 'A0001')
 , (4, 2, 0, 2, -3, 'A0002')
 , (5, 3, 0, 2, -2, 'A0001')
 , (6, 1, 0, 1, 1, 'A0001')
 , (7, 0, 1, 1, 50, 'A0001')
 , (8, 0, 2, 1, 100, 'A0002')
 , (9, 0, 3, 1, 30, 'A0003')
 , (10, 0, 3, 1, 20, 'A0004')
 , (11, 0, 1, 2, 10, 'A0001')
) A(TransId, StatusIssue, StatusReceipt, Direction, Qty, ItemId)


Here are two sets of examples using literals vs. pseudo-constants:

-- example issued with literals 
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusIssue = 1 
  AND ITR.Direction = 2

GO

-- example issued with pseudo-constants
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
     JOIN dbo.vLoV_StatusIssue SI
       ON ITR.StatusIssue = SI.Sold
     JOIN dbo.vLoV_InventDirection ID
       ON ITR.Direction = ID.Issue

GO

-- example receipt with literals 
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt = 1
  AND ITR.Direction = 1

GO

-- example receipt with pseudo-constants
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
     JOIN dbo.vLoV_StatusReceipt SR
       ON ITR.StatusReceipt = SR.Purchased
     JOIN dbo.vLoV_InventDirection ID
       ON ITR.Direction = ID.Receipt

 
As can be seen, the queries using pseudo-constants are somewhat more readable, though the gain is only relative, as each enumeration implies an additional join. In addition, when further business tables are added to the logic (e.g. items, purchase or sales orders), the pseudo-constants complicate the logic further, making it more difficult to separate the essential from the nonessential. Imagine a translation of the following query:

-- complex query 
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
     <several tables here>
WHERE ((ITR.StatusReceipt <= 3 AND ITR.Direction = 1)
   OR (ITR.StatusIssue <= 3 AND ITR.Direction = 2))
  AND (<more constraints here>)


The more difficult the constraints in the WHERE clause, the more improbable a translation of the literals into pseudo-constants becomes. Considering that an average query contains 5-10 tables, each of them with 1-3 enumerations, using pseudo-constants would make the queries impracticable, and their execution plans quite difficult to troubleshoot.
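
For illustration, here is a hedged sketch of such a translation (the placeholder tables and constraints left aside): because the pseudo-constants appear within OR’ed predicates, the enumeration views can no longer be attached through equality joins and degenerate into cross joins:

-- hypothetical translation via pseudo-constants
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
     CROSS JOIN dbo.vLoV_StatusReceipt SR
     CROSS JOIN dbo.vLoV_StatusIssue SI
     CROSS JOIN dbo.vLoV_InventDirection ID
WHERE ((ITR.StatusReceipt <= SR.Registered AND ITR.Direction = ID.Receipt)
   OR (ITR.StatusIssue <= SI.Picked AND ITR.Direction = ID.Issue))

The semantics stay the same only because each view returns exactly one row; the intent of the query, however, is already much harder to read.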

The more I think about it, the more an enumeration data type available as a global variable in SQL Server (like the ones available in VB) would be welcome, especially because the same values are used over and over again throughout the queries. Imagine, for example, the possibility of writing code as follows:

-- hypothetical query
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt = @@StatusReceipt.Purchased
  AND ITR.Direction = @@InventDirection.Receipt

From my point of view this would make the code more readable and easier to maintain. Instead, in order to make the code more readable, one is usually forced to add comments to it. This works as well, though the code can become cluttered with comments.

-- query with commented literals
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt <= 3 -- Purchased, Received, Registered 
  AND ITR.Direction = 1 -- Receipt

In conclusion, pseudo-constants’ usefulness is limited, and their usage goes against developers’ common sense; however, a data type with similar functionality in SQL Server would make the code more readable and easier to maintain.


PS: It is possible to simulate an enumeration data type in a table’s definition by using a CHECK constraint.
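
As a minimal sketch (the table and constraint names are illustrative), the Direction enumeration used above could be enforced as follows:

-- hypothetical table with an enumeration-like CHECK constraint
CREATE TABLE dbo.InventDirections (
  TransId int NOT NULL
, Direction int NOT NULL
  CONSTRAINT CK_InventDirections_Direction
  CHECK (Direction IN (0, 1, 2)) -- None, Receipt, Issue
)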

19 February 2018

🔬Data Science: Data Exploration (Definitions)

Data exploration: "The process of examining data in order to determine ranges and patterns within the data." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

Data Exploration: "The part of the data science process where a scientist will ask basic questions that helps her understand the context of a data set. What you learn during the exploration phase will guide more in-depth analysis later. Further, it helps you recognize when a result might be surprising and warrant further investigation." (KDnuggets)

"Data exploration is the first step of data analysis used to explore and visualize data to uncover insights from the start or identify areas or patterns to dig into more." (Tibco) [source]

"Data exploration is the initial step in data analysis, where users explore a large data set in an unstructured way to uncover initial patterns, characteristics, and points of interest. This process isn’t meant to reveal every bit of information a dataset holds, but rather to help create a broad picture of important trends and major points to study in greater detail." (Sisense) [source]

"Data exploration is the process through which a data analyst investigates the characteristics of a dataset to better understand the data contained within and to define basic metadata before building a data model. Data exploration helps the analyst choose the most appropriate tool for data processing and analysis, and leverages the innate human ability to recognize patterns in data that may not be captured by analytics tools." (Qlik) [source]

"Data exploration provides a first glance analysis of available data sources. Rather than trying to deliver precise insights such as those that result from data analytics, data exploration focuses on identifying key trends and significant variables." (Xplenty) [source]

15 February 2018

🔬Data Science: Data Preparation (Definitions)

Data preparation: "The process which involves checking or logging the data in; checking the data for accuracy; entering the data into the computer; transforming the data; and developing and documenting a database structure that integrates the various measures. This process includes preparation and assignment of appropriate metadata to describe the product in human readable code/format." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Data Preparation describes a range of processing activities that take place in order to transform a source of data into a format, quality and structure suitable for further analysis or processing. It is often referred to as Data Pre-Processing due to the fact it is an activity that organises the data for a follow-on processing stage." (experian) [source]

"Data preparation [also] called data wrangling, it’s everything that is concerned with the process of getting your data in good shape for analysis. It’s a critical part of the machine learning process." (RapidMiner) [source]

"Data preparation is an iterative-agile process for exploring, combining, cleaning and transforming raw data into curated datasets for self-service data integration, data science, data discovery, and BI/analytics." (Gartner)

"Data preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step prior to processing and often involves reformatting data, making corrections to data and the combining of data sets to enrich data." (Talend) [source]

🔬Data Science: Data Augmentation (Definitions)

"1.The process of adding to something to make it more or greater than the original. 2.In logic, a relationship where if X leads to Y, then XZ will lead to YZ." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A technique for improving the performance of a model by enriching the training data, e.g. by generating additional instances of minority classes."  (Vincent Karas & Björn W Schuller, "Deep Learning for Sentiment Analysis: An Overview and Perspectives", 2021)



🔬Data Science: Feature Extraction (Definitions)

"A technique that attempts to combine or transform predictors to make clear the information contained within them. Feature extraction methods include factor analysis, principal components analysis, correspondence analysis, multidimensional scaling, partial least square methods, and singular value decomposition." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"Extracting or deriving some useful information from the initially obtained data." (Shouvik Chakraborty & Kalyani Mali, "An Overview of Biomedical Image Analysis From the Deep Learning Perspective", 2020)

"A process of finding features of words and map them to vector space." (Neha Garg & Kamlesh Sharma, "Machine Learning in Text Analysis", 2020)

"A digital signal processing algorithm, which extracts distinctive values from the input signal." (Andrej Zgank et al, "Embodied Conversation: A Personalized Conversational HCI Interface for Ambient Intelligence", 2021)

"Feature extraction is a procedure in dimensionality reduction of extracting principal variables (features) from some random variables under consideration, usually achieved by extracting one principal variable (feature) as mapping from multiple random variables." (Usama A Khan & Josephine M Namayanja, "Reevaluating Factor Models: Feature Extraction of the Factor Zoo", 2021)

🔬Data Science: Feature Selection (Definitions)

"A method by which to decide on which features (columns) to keep in the analysis that will be done by the data mining algorithms. One of the first things to be done in a data mining project; this uncovers the most important variables among the set of predictor variables. Many of the predictor variables in a data set may not really be important for making an accurate predictive model, and only dilute/reduce the accuracy score of the model if included." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"The process a cybersecurity engineer uses to choose the features in which a given attack may manifest." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"Feature selection is the process of selecting important principal variables (features) from some random variables under consideration, usually achieved by selecting a principal variable (feature) as one of the random variables." (Usama A Khan & Josephine M Namayanja, "Reevaluating Factor Models: Feature Extraction of the Factor Zoo", 2021)

"It is used to select appropriate features from the available data for improving efficiency of machine learning algorithms." (Gunjan Ansari et al, "Natural Language Processing in Online Reviews", 2021)

🔬Data Science: Data Visualization (Definitions)

"Technique for presentation and analysis of data through visual objects, such as graphs, charts, images, and specialized tabular formats." (Paulraj Ponniah, "Data Warehousing Fundamentals", 2001)

"Technique for presentation and analysis of data through visual objects, such as graphs, charts, images, and specialized tabular formats." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010) 

"Visual representation of data, aiming to convey as much information as possible through visual processes." (Alfredo Vellido & Iván Olie, "Clustering and Visualization of Multivariate Time Series", 2010)

"Techniques for graphical representation of trends, patterns and other information." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Information abstracted in a schematic form to provide visual insights into sets of data. Data visualization enables us to go from the abstract numbers in a computer program (ones and zeros) to visual interpretation of data. Text visualization means converting textual information into graphic representation, so we can see information without having to read the data, as tables, histograms, pie or bar charts, or Cartesian coordinates." (Anna Ursyn, "Visualization as Communication with Graphic Representation", 2015)

"[...] data visualization [is] a tool that, by applying perceptual mechanisms to the visual representation of abstract quantitative data, facilitates the search for relevant shapes, order, or exceptions." (Jorge Camões, "Data at Work: Best practices for creating effective charts and information graphics in Microsoft Excel", 2016)

"Presenting data and summary information using graphics, animation, and three-dimensional displays. Tools for visually displaying information and relationships often using dynamic and interactive graphics." (Daniel J Power & Ciara Heavin, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

"Data Visualization is a way of representing the data collected in the form of figures and diagrams like tables, charts, graphs in order to make the data for common man more easily understandable." (Kirti R Bhatele, "Data Analysis on Global Stratification", 2020)

"Techniques for turning data into information by using the high capacity of the human brain to visually recognize patterns and trends. There are many specialized techniques designed to make particular kinds of visualization easy." (Information Management)

"The art of communicating meaningful data visually. This can involve infographics, traditional plots, or even full data dashboards." (KDnuggets)

"The practice of structuring and arranging data within a visual context to help users understand it. Patterns and trends that might be unrecognizable to the layman in text-based data can be easily viewed and digested by end users with the help of data visualization software." (Insight Software)

"Data visualization enables people to easily uncover actionable insights by presenting information and data in graphical, and often interactive graphs, charts, and maps." (Qlik) [source]

"Data visualization is the graphical representation of data to help people understand context and significance. Interactive data visualization enables companies to drill down to explore details, identify patterns and outliers, and change which data is processed and/or excluded." (Tibco) [source]

"Data visualization is the practice of translating information into a visual context, such as a map or graph, to make data easier for the human brain to understand and pull insights from." (Techtarget) [source]

"Data visualization is the process of graphically illustrating data sets to discover hidden patterns, trends, and relationships in order to develop key insights. Data visualization uses data points as a basis for the creation of graphs, charts, plots, and other images." (Talend) [source]

"Data visualization is the use of graphics to represent data. The purpose of these graphics is to quickly and concisely communicate the most important insights produced by data analytics." (Xplenty) [source]

🔬Data Science: Optimization (Definitions)

"Term used to describe analytics that calculate and determine the most ideal scenario to meet a specific target. Optimization procedures analyze each scenario and supply a score. An optimization analytic can run through hundreds, even thousands, of scenarios and rank each one based on a target that is being achieved." (Brittany Bullard, "Style and Statistics", 2016)

"Optimization is the process of finding the most efficient algorithm for a given task." (Edward T Chen, "Deep Learning and Sustainable Telemedicine", 2020)

🔬Data Science: Speech Recognition (Definitions)

"Automatic decoding of a sound pattern into phonemes or words." (Guido Deboeck & Teuvo Kohonen (Eds), "Visual Explorations in Finance with Self-Organizing Maps" 2nd Ed., 2000)

"Speech recognition is a process through which machines convert words or phrases spoken into a machine-readable format." (Hari K Kondaveeti et al, "Deep Learning Applications in Agriculture: The Role of Deep Learning in Smart Agriculture", 2021)

13 February 2018

🔬Data Science: Data Model (Definitions)

"A model that describes in an abstract way how data is represented in an information system. A data model can be a part of ontology, which is a description of how data is represented in an entire domain" (Mark Olive, "SHARE: A European Healthgrid Roadmap", 2009)

"Description of the node structure that defines its entities, fields and relationships." (Roberto Barbera et al, "gLibrary/DRI: A Grid-Based Platform to Host Muliple Repositories for Digital Content", 2009)

"An abstract model that describes how data are presented, organized and related to." (Ali M Tanyer, "Design and Evaluation of an Integrated Design Practice Course in the Curriculum of Architecture", 2010)

"The first of a series of data models that more closely represented the real world, modeling both data and their relationships in a single structure known as an object. The SDM, published in 1981, was developed by M. Hammer and D. McLeod." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management 9th Ed", 2011)

"The way of organizing and representing data is data model." (Uma V & Jayanthi G, "Spatio-Temporal Hot Spot Analysis of Epidemic Diseases Using Geographic Information System for Improved Healthcare", 2019)

12 February 2018

🔬Data Science: Correlation (Definitions)

[correlation coefficient:] "A measure to determine how closely a scatterplot of two continuous variables falls on a straight line." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"A metric that measures the linear relationship between two process variables. Correlation describes the X and Y relationship with a single number (the Pearson’s Correlation Coefficient (r)), whereas regression summarizes the relationship with a line - the regression line." (Lynne Hambleton, "Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices", 2007)

[correlation coefficient:] "A measure of the degree of correlation between the two variables. The range of values it takes is between −1 and +1. A negative value of r indicates an inverse relationship. A positive value of r indicates a direct relationship. A zero value of r indicates that the two variables are independent of each other. The closer r is to +1 and −1, the stronger the relationship between the two variables." (Jae K Shim & Joel G Siegel, "Budgeting Basics and Beyond", 2008)

"The degree of relationship between business and economic variables such as cost and volume. Correlation analysis evaluates cause/effect relationships. It looks consistently at how the value of one variable changes when the value of the other is changed. A prediction can be made based on the relationship uncovered. An example is the effect of advertising on sales. A degree of correlation is measured statistically by the coefficient of determination (R-squared)." (Jae K Shim & Joel G Siegel, "Budgeting Basics and Beyond", 2008)

"A figure quantifying the correlation between risk events. This number is between negative one and positive one." (Annetta Cortez & Bob Yehling, "The Complete Idiot's Guide® To Risk Management", 2010)

"A mechanism used to associate messages with the correct workflow service instance. Correlation is also used to associate multiple messaging activities with each other within a workflow." (Bruce Bukovics, "Pro WF: Windows Workflow in .NET 4", 2010)

"Correlation is sometimes used informally to mean a statistical association between two variables, or perhaps the strength of such an association. Technically, the correlation can be interpreted as the degree to which a linear relationship between the variables exists (i.e., each variable is a linear function of the other) as measured by the correlation coefficient." (Herbert I Weisberg, "Bias and Causation: Models and Judgment for Valid Comparisons", 2010)

"The degree of relationship between two variables; in risk management, specifically the degree of relationship between potential risks." (Annetta Cortez & Bob Yehling, "The Complete Idiot's Guide® To Risk Management", 2010)

"A predictive relationship between two factors, such that when one factor changes, you can predict the nature, direction and/or amount of change in the other factor. Not necessarily a cause-and-effect relationship." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Organizing and recognizing one related event threat out of several reported, but previously distinct, events." (Mark Rhodes-Ousley, "Information Security: The Complete Reference" 2nd Ed., 2013)

"Association in the values of two or more variables." (Meta S Brown, "Data Mining For Dummies", 2014)

[correlation coefficient:] "A statistic that quantifies the degree of association between two or more variables. There are many kinds of correlation coefficients, depending on the type of data and relationship predicted." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"The degree of association between two or more variables." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"A statistical measure that indicates the extent to which two variables are related. A positive correlation indicates that, as one variable increases, the other increases as well. For a negative correlation, as one variable increases, the other decreases." (Jonathan Ferrar et al, "The Power of People: Learn How Successful Organizations Use Workforce Analytics To Improve Business Performance", 2017)

11 February 2018

🔬Data Science: Parametric Estimating (Definitions)

[parametric:] "A statistical procedure that makes assumptions concerning the frequency distributions." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"A simplified mathematical description of a system or process, used to assist calculations and predictions. Generally speaking, parametric models calculate the dependent variables of cost and duration on the basis of one or more variables." (Project Management Institute, "Practice Standard for Project Estimating", 2010)

"An estimating technique that uses a statistical relationship between historical data and other variables (e.g., square footage in construction, lines of code in software development) to calculate an estimate for activity parameters, such as scope, cost, budget, and duration. An example for the cost parameter is multiplying the planned quantity of work to be performed by the historical cost per unit to obtain the estimated cost." (Project Management Institute, "Practice Standard for Project Estimating", 2010)

"A branch of statistics that assumes the data being examined comes from a variety of known probability distributions. In general, the tests sacrifice generalizability for speed of computation and precision, providing the requisite assumptions are met." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"An estimating technique in which an algorithm is used to calculate cost or duration based on historical data and project parameters." (For Dummies, "PMP Certification All-in-One For Dummies" 2nd Ed., 2013)

"Inferential statistical procedures that rely on sample statistics to draw inferences about population parameters, such as mean and variance." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

🔬Data Science: Non-Parametric Tests (Definitions)

[nonparametric:] "A statistical procedure that does not require a normal distribution of the data." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"A branch of statistics that makes no assumptions on the underlying distributions of the data being examined. In general, the tests are far more generalizable but sacrifice precision and power." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"Inferential statistical procedures that do not rely on estimating population parameters such as the mean and variance." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"A family of methods which makes no assumptions about the population distribution. Non-parametric methods most commonly work by ignoring the actual values, and, instead, analyzing only their ranks. This approach ensures that the test is not affected much by outliers, and does not assume any particular distribution. The clear advantage of non-parametric tests is that they do not require the assumption of sampling from a Gaussian population. When the assumption of Gaussian distribution does not hold, non-parametric tests have more power than parametric tests to detect differences." (Soheila Nasiri & Bijan Raahemi, "Non-Parametric Statistical Analysis of Rare Events in Healthcare", 2017)


🔬Data Science: Gaussian Distribution (Definitions)

"Represents a conventional scale for a normally distributed bell-shaped curve that has a central tendency of zero and a standard deviation of one unit, wherein the units are called sigma (σ)." (Lynne Hambleton, "Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices", 2007)

"Also called the standard normal distribution, is the normal distribution with mean zero and variance one." (Dimitrios G Tsalikakis et al, "Segmentation of Cardiac Magnetic Resonance Images", 2009)

"A normal distribution with the parameters μ = 0 and σ = 1. The random variable for this distribution is denoted by Z. The z-tables (values of the random variable Z and the corresponding probabilities) are widely used for normal distributions." (Peter Oakander et al, "CPM Scheduling for Construction: Best Practices and Guidelines", 2014)


🔬Data Science: K-nearest neighbors (Definitions)

"A modeling technique that assigns values to points based on the values of the k nearby points, such as average value, or most common value." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A simple and popular classifier algorithm that assigns a class (in a preexisting classification) to an object whose class is unknown. [...] From a collection of data objects whose class is known, the algorithm computes the distances from the object of unknown class to k (a number chosen by the user) objects of known class. The most common class (i.e., the class that is assigned most often to the nearest k objects) is assigned to the object of unknown class." (Jules H Berman, "Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information", 2013)

"A method used for classification and regression. Cases are analyzed, and class membership is assigned based on similarity to other cases, where cases that are similar (or 'near' in characteristics) are known as neighbors." (Brenda L Dietrich et al, "Analytics Across the Enterprise", 2014)

"A prediction method, which uses a function of the k most similar observations from the training set to generate a prediction, such as the mean." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"K-Nearest Neighbors classification is an instance-based supervised learning method that works well with distance-sensitive data." (Matthew Kirk, "Thoughtful Machine Learning", 2015)

"An algorithm that estimates an unknown data item as being like the majority of the k-closest neighbors to that item." (David Natingga, "Data Science Algorithms in a Week" 2nd Ed., 2018)

"K-nearest neighbourhood is a algorithm which stores all available cases and classifies new cases based on a similarity measure. It is used in statistical estimation and pattern recognition." (Aman Tyagi, "Healthcare-Internet of Things and Its Components: Technologies, Benefits, Algorithms, Security, and Challenges", 2021)

10 February 2018

🔬Data Science: Data Mining (Definitions)

"The non-trivial extraction of implicit, previously unknown, and potentially useful information from data" (Frawley et al., "Knowledge discovery in databases: An overview", 1991)

"Data mining is the efficient discovery of valuable, nonobvious information from a large collection of data." (Joseph P Bigus,"Data Mining with Neural Networks: Solving business problems from application development to decision support", 1996)

"Data mining is the process of examining large amounts of aggregated data. The objective of data mining is to either predict what may happen based on trends or patterns in the data or to discover interesting correlations in the data." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"A data-driven approach to analysis and prediction by applying sophisticated techniques and algorithms to discover knowledge." (Paulraj Ponniah, "Data Warehousing Fundamentals", 2001)

"A class of undirected queries, often against the most atomic data, that seek to find unexpected patterns in the data. The most valuable results from data mining are clustering, classifying, estimating, predicting, and finding things that occur together. There are many kinds of tools that play a role in data mining. The principal tools include decision trees, neural networks, memory- and cased-based reasoning tools, visualization tools, genetic algorithms, fuzzy logic, and classical statistics. Generally, data mining is a client of the data warehouse." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit" 2nd Ed., 2002)

"The discovery of information hidden within data." (William A Giovinazzo, "Internet-Enabled Business Intelligence", 2002)

"the process of extracting valid, authentic, and actionable information from large databases." (Seth Paul et al. "Preparing and Mining Data with Microsoft SQL Server 2000 and Analysis", 2002)

"Advanced analysis or data mining is the analysis of detailed data to detect patterns, behaviors, and relationships in data that were previously only partially known or at times totally unknown." (Margaret Y Chu, "Blissful Data", 2004)

"Analysis of detail data to discover relationships, patterns, or associations between values." (Margaret Y Chu, "Blissful Data ", 2004)

"An information extraction activity whose goal is to discover hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"the process of analyzing large amounts of data in search of previously undiscovered business patterns." (William H Inmon, "Building the Data Warehouse", 2005)

"A type of advanced analysis used to determine certain patterns within data. Data mining is most often associated with predictive analysis based on historical detail, and the generation of models for further analysis and query." (Jill Dyché & Evan Levy, "Customer Data Integration", 2006)

"Refers to the process of identifying nontrivial facts, patterns and relationships from large databases. The databases have often been put together for a different purpose from the data mining exercise." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"Data mining is the process of discovering implicit patterns in data stored in data warehouse and using those patterns for business advantage such as predicting future trends." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"Digging through data (usually in a data warehouse or data mart) to identify interesting patterns." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"Intelligently analyzing data to extract hidden trends, patterns, and information. Commonly used by statisticians, data analysts and Management Information Systems communities." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)

"The process of extracting valid, authentic, and actionable information from large databases." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"The process of retrieving relevant data to make intelligent decisions." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"A process that minimally has four stages: (1) data preparation that may involve 'data cleaning' and even 'data transformation', (2) initial exploration of the data, (3) model building or pattern identification, and (4) deployment, which means subjecting new data to the 'model' to predict outcomes of cases found in the new data." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"Automatically searching large volumes of data for patterns or associations." (Mark Olive, "SHARE: A European Healthgrid Roadmap", 2009)

"The use of machine learning algorithms to find faint patterns of relationship between data elements in large, noisy, and messy data sets, which can lead to actions to increase benefit in some form (diagnosis, profit, detection, etc.)." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"A data-driven approach to analysis and prediction by applying sophisticated techniques and algorithms to discover knowledge." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010) 

"A way of extracting knowledge from a database by searching for correlations in the data and presenting promising hypotheses to the user for analysis and consideration." (Toby J Teorey, "Database Modeling and Design" 4th Ed., 2010)

"The process of using mathematical algorithms (usually implemented in computer software) to attempt to transform raw data into information that is not otherwise visible (for example, creating a query to forecast sales for the future based on sales from the past)." (Ken Withee, "Microsoft Business Intelligence For Dummies", 2010)

"A process that employs automated tools to analyze data in a data warehouse and other sources and to proactively identify possible relationships and anomalies." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"Process of analyzing data from different perspectives and summarizing it into useful information (e.g., information that can be used to increase revenue, cuts costs, or both)." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed., 2011)

"The process of sifting through large amounts of data using pattern recognition, fuzzy logic, and other knowledge discovery statistical techniques to identify previously unknown, unsuspected, and potentially meaningful data content relationships and trends." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Data mining, a branch of computer science, is the process of extracting patterns from large data sets by combining statistical analysis and artificial intelligence with database management. Data mining is seen as an increasingly important tool by modern business to transform data into business intelligence giving an informational advantage." (T T Wong & Loretta K W Sze, "A Neuro-Fuzzy Partner Selection System for Business Social Networks", 2012)

"Field of analytics with structured data. The model inference process minimally has four stages: data preparation, involving data cleaning, transformation and selection; initial exploration of the data; model building or pattern identification; and deployment, putting new data through the model to obtain their predicted outcomes." (Gary Miner et al, "Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications", 2012)

"The process of identifying commercially useful patterns or relationships in databases or other computer repositories through the use of advanced statistical tools." (Microsoft, "SQL Server 2012 Glossary", 2012)

"The process of exploring and analyzing large amounts of data to find patterns." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"An umbrella term for analytic techniques that facilitate fast pattern discovery and model building, particularly with large datasets." (Meta S Brown, "Data Mining For Dummies", 2014)

"Analysis of large quantities of data to find patterns such as groups of records, unusual records, and dependencies" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The practice of analyzing big data using mathematical models to develop insights, usually including machine learning algorithms as opposed to statistical methods."(Brenda L Dietrich et al, "Analytics Across the Enterprise", 2014)

"Data mining is the analysis of data for relationships that have not previously been discovered." (Piyush K Shukla & Madhuvan Dixit, "Big Data: An Emerging Field of Data Engineering", Handbook of Research on Security Considerations in Cloud Computing, 2015)

"A methodology used by organizations to better understand their customers, products, markets, or any other phase of the business." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"Extracting information from a database to zero in on certain facts or summarize a large amount of data." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"It refers to the process of identifying and extracting patterns in large data sets based on artificial intelligence, machine learning, and statistical techniques." (Hamid R Arabnia et al, "Application of Big Data for National Security", 2015)

"The process of exploring and analyzing large amounts of data to find patterns." (Judith S Hurwitz, "Cognitive Computing and Big Data Analytics", 2015)

"Term used to describe analyzing large amounts of data to find patterns, correlations, and similarities." (Brittany Bullard, "Style and Statistics", 2016)

"The process of extracting meaningful knowledge from large volumes of data contained in data warehouses." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"A class of analytical applications that help users search for hidden patterns in a data set. Data mining is a process of analyzing large amounts of data to identify data–content relationships. Data mining is one tool used in decision support special studies. This process is also known as data surfing or knowledge discovery." (Daniel J Power & Ciara Heavin, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

"The process of collecting, searching through, and analyzing a large amount of data in a database to discover patterns or relationships." (Jonathan Ferrar et al, "The Power of People: Learn How Successful Organizations Use Workforce Analytics To Improve Business Performance", 2017)

"Data mining involves finding meaningful patterns and deriving insights from large data sets. It is closely related to analytics. Data mining uses statistics, machine learning, and artificial intelligence techniques to derive meaningful patterns." (Amar Sahay, "Business Analytics" Vol. I, 2018)

"The analysis of the data held in data warehouses in order to produce new and useful information." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"The process of collecting critical business information from a data source, correlating the information, and uncovering associations, patterns, and trends." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"The process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems." (Dmitry Korzun et al, "Semantic Methods for Data Mining in Smart Spaces", 2019)

"A technique using software tools geared for the user who typically does not know exactly what he's searching for, but is looking for particular patterns or trends. Data mining is the process of sifting through large amounts of data to produce data content relationships. It can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This is also known as data surfing." (Information Management)

"An analytical process that attempts to find correlations or patterns in large data sets for the purpose of data or knowledge discovery." (NIST SP 800-53)

"Extracting previously unknown information from databases and using that data for important business decisions, in many cases helping to create new insights." (Solutions Review)

"is the process of collecting data, aggregating it according to type and sorting through it to identify patterns and predict future trends." (Accenture)

"the process of analyzing large batches of data to find patterns and instances of statistical significance. By utilizing software to look for patterns in large batches of data, businesses can learn more about their customers and develop more effective strategies for acquisition, as well as increase sales and decrease overall costs." (Insight Software)

"The process of identifying commercially useful patterns or relationships in databases or other computer repositories through the use of advanced statistical tools." (Microsoft)

"The process of pulling actionable insight out of a set of data and putting it to good use. This includes everything from cleaning and organizing the data; to analyzing it to find meaningful patterns and connections; to communicating those connections in a way that helps decision-makers improve their product or organization." (KDnuggets)

"Data mining is the process of analyzing hidden patterns of data according to different perspectives for categorization into useful information, which is collected and assembled in common areas, such as data warehouses, for efficient analysis, data mining algorithms, facilitating business decision making and other information requirements to ultimately cut costs and increase revenue. Data mining is also known as data discovery and knowledge discovery." (Techopedia)

"Data mining is an automated analytical method that lets companies extract usable information from massive sets of raw data. Data mining combines several branches of computer science and analytics, relying on intelligent methods to uncover patterns and insights in large sets of information." (Sisense) [source]

"Data mining is the process of analyzing data from different sources and summarizing it into relevant information that can be used to help increase revenue and decrease costs. Its primary purpose is to find correlations or patterns among dozens of fields in large databases." (Logi Analytics) [source]

"Data mining is the process of analyzing massive volumes of data to discover business intelligence that helps companies solve problems, mitigate risks, and seize new opportunities." (Talend) [source]

"Data Mining is the process of collecting data, aggregating it according to type and sorting through it to identify patterns and predict future trends." (Accenture)

"Data mining is the process of discovering meaningful correlations, patterns and trends by sifting through large amounts of data stored in repositories. Data mining employs pattern recognition technologies, as well as statistical and mathematical techniques." (Gartner)

"Data mining is the process of extracting relevant patterns, deviations and relationships within large data sets to predict outcomes and glean insights. Through it, companies convert big data into actionable information, relying upon statistical analysis, machine learning and computer science." (snowflake) [source]

"Data mining is the work of analyzing business information in order to discover patterns and create predictive models that can validate new business insights. […] Unlike data analytics, in which discovery goals are often not known or well defined at the outset, data mining efforts are usually driven by a specific absence of information that can’t be satisfied through standard data queries or reports. Data mining yields information from which predictive models can be derived and then tested, leading to a greater understanding of the marketplace." (Informatica) [source]

09 February 2018

🔬Data Science: Normalization (Definitions)

"Mathematical transformations to generate a new set of values that map onto a different range." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

[Min–max normalization:] "Normalizing a variable value to a predetermined range." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)
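
As a worked formula, the min–max normalization mentioned above maps a value $x$ onto a predetermined range $[a, b]$ via

$$x' = a + \frac{(x - x_{\min})(b - a)}{x_{\max} - x_{\min}}$$

which for the common target range $[0, 1]$ reduces to $x' = (x - x_{\min}) / (x_{\max} - x_{\min})$.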

[function point normalization:] "Dividing a metric by the project’s function points to allow you to compare projects of different sizes and complexities." (Rod Stephens, "Beginning Software Engineering", 2015)

"For metrics, performing some calculation on a metric to account for possible differences in project size or complexity. Two general approaches are size normalization and function point normalization." (Rod Stephens, "Beginning Software Engineering", 2015)

[size normalization:] "For metrics, dividing a metric by an indicator of size such as lines of code or days of work. For example, bugs/KLOC tells you how buggy the code is normalized for the size of the project." (Rod Stephens, "Beginning Software Engineering", 2015)


07 February 2018

🔬Data Science: Hadoop (Definitions)

"An Apache-managed software framework derived from MapReduce and Bigtable. Hadoop allows applications based on MapReduce to run on large clusters of commodity hardware. Hadoop is designed to parallelize data processing across computing nodes to speed computations and hide latency. Two major components of Hadoop exist: a massively scalable distributed file system that can support petabytes of data and a massively scalable MapReduce engine that computes results in batch." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"An open-source software platform developed by Apache Software Foundation for data-intensive applications where the data are often widely distributed across different hardware systems and geographical locations." (Kenneth A Shaw, "Integrated Management of Processes and Information", 2013)

"Technology designed to house Big Data; a framework for managing data" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"an Apache-managed software framework derived from MapReduce. Big Table Hadoop enables applications based on MapReduce to run on large clusters of commodity hardware. Hadoop is designed to parallelize data processing across computing nodes to speed up computations and hide latency. The two major components of Hadoop are a massively scalable distributed file system that can support petabytes of data and a massively scalable MapReduce engine that computes results in batch." (Judith S Hurwitz, "Cognitive Computing and Big Data Analytics", 2015)

"An open-source framework that is built to process and store huge amounts of data across a distributed file system." (Jason Williamson, "Getting a Big Data Job For Dummies", 2015)

"Open-source software framework for distributed storage and distributed processing of Big Data on clusters of commodity hardware." (Hamid R Arabnia et al, "Application of Big Data for National Security", 2015)

"A batch processing infrastructure that stores fi les and distributes work across a group of servers. The infrastructure is composed of HDFS and MapReduce components. Hadoop is an open source software platform designed to store and process quantities of data that are too large for just one particular device or server. Hadoop’s strength lies in its ability to scale across thousands of commodity servers that don’t share memory or disk space." (Benoy Antony et al, "Professional Hadoop®", 2016)

"Apache Hadoop is an open-source framework for processing large volume of data in a clustered environment. It uses simple MapReduce programming model for reliable, scalable and distributed computing. The storage and computation both are distributed in this framework." (Kaushik Pal, 2016)

"A framework that allow for the distributed processing for large datasets." (Neha Garg & Kamlesh Sharma, "Machine Learning in Text Analysis", 2020)

 "Hadoop is an open source implementation of the MapReduce paper. Initially, Hadoop required that the map, reduce, and any custom format readers be implemented and deployed to the cluster. Eventually, higher level abstractions were developed, like Apache Hive and Apache Pig." (Alex Thomas, "Natural Language Processing with Spark NLP", 2020)

"A batch processing infrastructure that stores files and distributes work across a group of servers." (Oracle)

"an open-source framework that is built to enable the process and storage of big data across a distributed file system." (Analytics Insight)

"Apache Hadoop is an open-source, Java-based software platform that manages data processing and storage for big data applications. Hadoop works by distributing large data sets and analytics jobs across nodes in a computing cluster, breaking them down into smaller workloads that can be run in parallel. Hadoop can process both structured and unstructured data, and scale up reliably from a single server to thousands of machines." (Databricks) [source]

"Hadoop is an open source software framework for storing and processing large volumes of distributed data. It provides a set of instructions that organizes and processes data on many servers rather than from a centralized management nexus." (Informatica) [source]

🔬Data Science: Semantics (Definitions)

 "The meaning of a model that is well-formed according to the syntax of a language." (Anneke Kleppe et al, "MDA Explained: The Model Driven Architecture: Practice and Promise", 2003)

"The part of language concerned with meaning. For example, the phrases 'my mother’s brother' and 'my uncle' are two ways of saying the same thing and, therefore, have the same semantic value." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)

"The study of meaning (often the meaning of words). In business systems we are concerned with making the meaning of data explicit (structuring unstructured data), as well as making it explicit enough that an agent could reason about it." (Danette McGilvray, "Executing Data Quality Projects", 2008)

"The branch of philosophy concerned with describing meaning." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"Having to do with meaning, usually of words and/or symbols (the syntax). Part of semiotic theory." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The study of the meaning behind the syntax (signs and symbols) of a language or graphical expression of something. The semantics can only be understood through the syntax. The syntax is like the encoded representation of the semantics." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The study of meaning. In the context of Big Data, semantics is the technique of creating meaningful assertions about data objects. A meaningful assertion, as used here, is a triple consisting of an identified data object, a data value, and a descriptor for the data value. In practical terms, semantics involves making assertions about data objects (i.e., making triples), combining assertions about data objects (i.e., merging triples), and assigning data objects to classes; hence relating triples to other triples. As a word of warning, few informaticians would define semantics in these terms, but I would suggest that most definitions for semantics would be functionally equivalent to the definition offered here." (Jules H Berman, "Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information", 2013)

"Set of mappings forming a representation in order to define the meaningful information of the data." (Hamid R Arabnia et al, "Application of Big Data for National Security", 2015)

"Semantics is a branch of linguistics focused on the meaning communicated by language." (Alex Thomas, "Natural Language Processing with Spark NLP", 2020)

06 February 2018

🔬Data Science: Data Profiling (Definitions)

"A process focused on generating data metrics and measuring data quality. The data metrics can be collected at the column level, e.g., value frequency, nullity measurements, and uniqueness/match quality measurements; at the table level, e.g., primary key violations; or cross-table relationships, e.g., foreign key violations." (Alex Berson & Lawrence Dubov, "Master Data Management and Customer Data Integration for a Global Enterprise", 2007)

"A set of techniques for searching through data looking for potential errors and anomalies, such as similar data with different spellings, data outside boundaries and missing values." (Keith Gordon, "Principles of Data Management", 2007)

"Data profiling (and analysis services) provides functionality to understand the quality, structure, and relationships of data across enterprise systems, from which data cleansing and standardization rules can be determined for improving the overall data quality and consistency." (Martin Oberhofer et al,"Enterprise Master Data Management", 2008)

"A process for looking at the data within the source systems and understanding the data elements and the anomalies." (Tony Fisher, "The Data Asset", 2009)

"An approach to data quality analysis, using statistics to show patterns of usage, and patterns of contents, and automated as much as possible. Some profiling activities must be done manually, but most can be automated." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Data profiling is used to assess the existing state of data quality. It is also used to understand the duplicates in the master data or the gaps in linkages. It can be used to understand the scope of data enrichment to enhance the value of customer data assets." (Saumya Chaki, "Enterprise Information Management in Practice", 2015)

"An automated method of analyzing large amounts of data to determine its quality and integrity." (Gregory Lampshire, "The Data and Analytics Playbook", 2016)

"Data profiling assesses a set of data and provides information on the values, the length of strings, the level of completeness, and the distribution patterns of each column." (Robert Hawker, "Practical Data Quality", 2023)

"The process of examining the data available in different data sources and collecting statistics and information about this data. Data profiling helps to assess the quality level of the data according to a defined goal." (Talend)

"Data profiling, a critical first step in data migration, automates the identification of problematic data and metadata and enables companies to correct inconsistencies, redundancies and inaccuracies in corporate databases." (Information Management)

"Data profiling is the act of examining, cleansing and analyzing an existing data source to generate actionable summaries. Proper techniques of data profiling verify the accuracy and validity of data, leading to better data-driven decision making that customers can use to their advantage." (snowflake) [source]

🔬Data Science: Pig (Definitions)

"A programming interface for programmers to create MapReduce jobs within Hadoop." (Jason Williamson, "Getting a Big Data Job For Dummies", 2015)

"A programming language designed to handle any type of data. Pig helps users to focus more on analyzing large datasets and less time writing map programs and reduce programs. Like Hive and Impala, Pig is a high-level platform used for creating MapReduce programs more easily. The programming language Pig uses is called Pig Latin, and it allows you to extract, transform, and load (ETL) data at a very high level. This greatly reduces the effort if this was written in JAVA code; PIG is only a fraction of that." (Benoy Antony et al, "Professional Hadoop®", 2016)

"An open-source platform for analyzing large data sets that consists of the following: (1) Pig Latin scripting language; (2) Pig interpreter that converts Pig Latin scripts into MapReduce jobs. Pig runs as a client application." (Oracle)


05 February 2018

🔬Data Science: Machine Learning [ML] (Definitions)

"Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed." (Arthur Samuel, 1959) [attributed]

"Computer methods for accumulating, changing, and updating knowledge in an AI computer system." (Nikola K Kasabov, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering", 1996)

"A term often used to denote the application of generic model-fitting or classification algorithms for predictive data mining. This differs from traditional statistical data analysis, which is usually concerned with the estimation of population parameters by statistical inference and p-values. The emphasis in data mining machine learning algorithms is usually on the accuracy of the prediction as opposed to discovering the relationship and influences of different variables." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"A discipline grounded in computer science, statistics, and psychology that includes algorithms that learn or improve their performance based on exposure to patterns in data, rather than by explicit programming." (Judith S Hurwitz, "Cognitive Computing and Big Data Analytics", 2015)

"Machine learning is the intersection between theoretically sound computer science and practically noisy data. Essentially, it’s about machines making sense out of data in much the same way that humans do." (Matthew Kirk, "Thoughtful Machine Learning", 2015)

"Computer programs that have the ability to learn over time as new data becomes available. This type of analytical programming can learn more about a customer’s online shopping behavior over time and start to predict which items the customer will likely click on and purchase." (Brittany Bullard, "Style and Statistics", 2016)

"Machine learning is home to numerous techniques for creating classifiers by training them with already correctly categorized examples. This training is called supervised learning; it is supervised because it starts with instances labeled by category, and it involves learning because over time the classifier improves its performance by adjusting the weights for features that distinguish the categories. But strictly speaking, supervised learning techniques do not learn the categories; they implement and apply categories that they inherit or are given to them." (Robert J Glushko, "The Discipline of Organizing: Professional Edition" 4th Ed., 2016)

"A subdiscipline of computer science that addresses similar challenges to traditional statistical modeling, but with different techniques and a stronger focus on predictive accuracy." (Jonathan Ferrar et al, "The Power of People: Learn How Successful Organizations Use Workforce Analytics To Improve Business Performance", 2017)

"Machine learning describes a broad set of methods for extracting meaningful patterns from existing data and applying those patterns to make decisions or predictions on future data." (Benjamin Bengfort et al, "Applied Text Analysis with Python: Enabling Language-Aware Data Products with Machine Learning", 2018)

"Machine learning is a method of designing systems that can learn, adjust, and improve based on the data fed to them. Machine learning works based on predictive and statistical algorithms that are provided to these machines. The algorithms are designed to learn and improve as more data flows through the system." (Amar Sahay, "Business Analytics" Vol. I, 2018)

"The field of computer science research that focuses on developing and evaluating algorithms that can extract useful patterns from data sets. A machine learning algorithm takes a data set as input and returns a model that encodes the patterns the algorithm extracted from the data." (John D Kelleher & Brendan Tierney, "Data science", 2018)

[In-Database Machine Learning:] "Using machine-learning algorithms that are built into the database solution. The benefit of in-database machine learning is that it reduces the time spent on moving data in and out of databases for analysis." (John D Kelleher & Brendan Tierney, "Data science", 2018)

"The science of developing techniques to give the computer inference and deduction capabilities to achieve diverse processing tasks autonomously." (Jorge Manjarrez-Sanchez, "In-Memory Analytics", 2018)

"A facet of AI that focuses on algorithms, allowing machines to learn without being programmed and change when exposed to new data." (Kirti R Bhatele et al, "The Role of Artificial Intelligence in Cyber Security", 2019)

"A field of artificial intelligence that uses statistical techniques to give computer systems the ability to learn." (Nil Goksel & Aras Bozkurt, "Artificial Intelligence in Education: Current Insights and Future Perspectives", 2019)

"A method of designing a sequence of actions to solve a problem that optimizes automatically through experience and with limited or no human intervention." (Soraya Sedkaoui, "Big Data Analytics for Entrepreneurial Success", 2019)

"The methods used to understand the patterns in the data and to obtain results from these patterns using various algorithms." (Tolga Ensari et al, "Overview of Machine Learning Approaches for Wireless Communication", 2019)

"A branch of artificial intelligence that focuses on data analysis methods that allow for automation of the process of analytical model building." (Timofei Bogomolov et al, "Identifying Patterns in Fresh Produce Purchases: The Application of Machine Learning Techniques", 2020)

"A discipline focused on the development and evaluation of algorithms that permit computers to use patterns, trends, and associations in data to perform tasks without being programmed by a human." (Osman Kandara & Eugene Kennedy, "Educational Data Mining: A Guide for Educational Researchers", 2020)

"A field of study of algorithms and statistical methods that allows software application to predict the accurate result." (S Kayalvizhi & D Thenmozhi, "Deep Learning Approach for Extracting Catch Phrases from Legal Documents", 2020)

"Is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed." (Rajandeep Kaur & Rajneesh Rani, "Comparative Study on ASD Identification Using Machine and Deep Learning", 2020)

"Is one of many subfields of artificial intelligence concerning the ways that computers learn from experience to improve their ability to think, plan, decide and act." (Lejla Banjanović-Mehmedović & Fahrudin Mehmedović, "Intelligent Manufacturing Systems Driven by Artificial Intelligence in Industry 4.0", 2020)

"It is an application of the artificial intelligence in which machines can automatically learn and solve problems using the learned experience." (Shouvik Chakraborty & Kalyani Mali, "An Overview of Biomedical Image Analysis From the Deep Learning Perspective", 2020)

"It refers to an application of artificial intelligence focusing on algorithms which can be used for building models (e.g., based on statistics) from input data. Such automatic analytical models need to provide outputs based on the learning relations between input and output values. The algorithms are often categorized as supervised, semi-supervised or unsupervised." (Ana Gavrovska & Andreja Samčović, "Intelligent Automation Using Machine and Deep Learning in Cybersecurity of Industrial IoT", 2020)

"Machine learning, in the simplest terms, is the analysis of statistics to help computers make decisions base on repeatable characteristics found in the data." (Vardhan K Agrawal, "Mastering Machine Learning with Core ML and Python", 2020)

"Machine learning is a field of computer science and mathematics that focuses on algorithms for building and using models “learned” from data." (Alex Thomas, "Natural Language Processing with Spark NLP", 2020)

"Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed." (Mohammad Haroon et al, Application of Machine Learning In Forensic Science, 2020)

"Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves." (R Murugan, "Implementation of Deep Learning Neural Network for Retinal Images", 2020)

"Machine learning is branch of data science which has concern with the design and development of algorithm to develop a system that can learn from data, identify the complex patterns and provide intelligent, reliable, repeatable decisions and results with minimal human interaction based on the provided input." (Neha Garg & Kamlesh Sharma, "Machine Learning in Text Analysis", 2020)

"A computer program having the capability to learn and adapt to new data without human assistance." (Sue Milton, "Data Privacy vs. Data Security", 2021)

"A rising area in computer science, where the computer systems are programmed to learn information from rich data sets to produce reliable results to a given problem." (Jinnie Shin et al, "Automated Essay Scoring Using Deep Learning Algorithms", 2021)

"Ability of a machine to learn from the data it is presented using different techniques that are supervised or non-supervised." (Sujata Ramnarayan, "Marketing and Artificial Intelligence: Personalization at Scale", 2021)

"Is a type of artificial intelligence where computer teaches itself the solution to a query discovering patterns in sets of data and matching fresh parts of data the based on probability." (James O Odia & Osaheni T Akpata, "Role of Data Science and Data Analytics in Forensic Accounting and Fraud Detection", 2021)

"It is again a sub set of AI in which we classify the data with the help of input data set, ANN, SVM, Random Forest are some of the algorithm used in this case." (Ajay Sharma, "Smart Agriculture Services Using Deep Learning, Big Data, and IoT", 2021)

"It refers to developing the ability in computers to use available data to train themselves automatically, and to learn from its own experiences without being explicitly programmed." (Shatakshi Singhet al, "A Survey on Intelligence Tools for Data Analytics", 2021)

"Machine learning is a scientific approach to analyse available data using algorithms and statistical models to accomplish a specific task by utilizing the patterns evolved." (Vandana Kalra et al, "Machine Learning and Its Application in Monitoring Diabetes Mellitus", 2021)

"Machine Learning is a statistical or mathematical model that performs data analysis, prediction, and clustering. This science is a subfield of Artificial Intelligence." (Sayani Ghosal & Amita Jain, "Research Journey of Hate Content Detection From Cyberspace", 2021)

"Machine learning is an application of artificial intelligence that provides systems the ability to automatically learn and improve from experience without being explicitly programmed." (Sercan Demirci et al, "Detection of Diabetic Retinopathy With Mobile Application Using Deep Learning", 2021)

"Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed and in the process developing computer programs that can access data and use it to learn for themselves." (Hari K Kondaveeti et al, "Deep Learning Applications in Agriculture: The Role of Deep Learning in Smart Agriculture", 2021)

"Set of knowledge discovery techniques for intelligent data analysis in order to find hidden patterns and associations, devise rules and make predictions." (Nenad Stefanovic, "Big Data Analytics in Supply Chain Management", 2021)

"The study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as 'training data', to make predictions or decisions without being explicitly programmed to do so." (Jan Bosch et al, "Engineering AI Systems: A Research Agenda", Artificial Intelligence Paradigms for Smart Cyber-Physical Systems, 2021)

"This can be regarded as a subset of AI which refers to analyzing structured data and identifying trends (correlations) for specific outcomes and using that information to predict future values (causation)." (Vijayaraghavan Varadharajan & Akanksha Rajendra Singh, "Building Intelligent Cities: Concepts, Principles, and Technologies", 2021)

"A discipline that studies methods and algorithms of automated learning from data through which computer systems can adjust their operations according to feedback they receive. A term strongly related to artificial intelligence, data mining, statistical methods." (KDnuggets)

"A process where a computer uses an algorithm to gain understanding about a set of data, then makes predictions based on its understanding." (KDnuggets)

"A type of artificial intelligence that provides computers with the ability to learn without being specifically programmed to do so, focusing on the development of computer applications that can teach themselves to change when exposed to new data." (Solutions Review)

"is a type of artificial intelligence that enable systems to learn patterns from data and subsequently improve from experience. It is an interdisciplinary field that includes information theory, control theory, statistics, and computer science. As it gathers and sorts more information, machine learning constantly gets better at identifying types and forms of data with little or no hard coded rules." (Accenture)

"Machine learning is a branch of artificial intelligence that deals with self-improving algorithms. The algorithms 'learn' by recording the results of vast quantities of data processing actions. Over time, the algorithm improves its functionality without being explicitly programmed." (Xplenty) [source]

"Machine learning is a subset of artificial intelligence (AI) that deals with the extracting of patterns from data, and then uses those patterns to enable algorithms to improve themselves with experience. This type of learning can be used to help computers recognize patterns and associations in massive amounts of data, and make predictions and forecasts based on its findings." (RapidMiner) [source]

"Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values." (Techtarget) [source]

"Machine Learning is a type of artificial intelligence that enable systems to learn patterns from data and subsequently improve from experience. It is an interdisciplinary field that includes information theory, control theory, statistics, and computer science. As it gathers and sorts more information, machine learning constantly gets better at identifying types and forms of data with little or no hard coded rules." (Accenture)

"Machine learning is a cutting-edge programming technique used to automate the construction of analytical models and enable applications to perform specified tasks more efficiently without being explicitly programmed. Machine learning allows system to automatically learn and increase their accuracy in task performance through experience." (Sumo Logic) [source]

"[Machine Learning is] a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. It focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The process of machine learning is similar to that of data mining. Both systems search through data to look for patterns. However, instead of extracting data for human comprehension - as is the case in data mining applications - machine learning uses that data to improve the program's own understanding. Machine learning programs detect patterns in data and adjust program actions accordingly." (Teradata) [source]

"Machine learning is the field of study that enables computers the ability to learn without being explicitly programmed." (Adobe)

"Machine learning is the subset of artificial intelligence (AI) that focuses on building systems that learn - or improve performance - based on the data they consume." (Oracle)

"Part of artificial intelligence where machines learn from what they are doing and become better over time." (Analytics Insight)