21 March 2009

🛢DBMS: Constraints (Definitions)

"A restriction placed upon the value that can be entered into a column or a row. Values can be equal to, greater than, or less than. A constraint limits the input." (Patrick Dalton, "Microsoft SQL Server Black Book", 1997)

"A property assigned to a table column that prevents certain types of non-valid data values from being placed in the column." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A condition defined against a column or columns on a table in the database to enforce business rules or relationships between tables in the database." (Bob Bryla, "Oracle Database Foundations", 2004)

"A database object that can be applied to tables to enforce different types of data integrity." (Sara Morganand & Tobias Thernstrom , "MCITP Self-Paced Training Kit : Designing and Optimizing Data Access by Using Microsoft SQL Server 2005 - Exam 70-442", 2007)

"(1) A restriction on a business action and the resulting data. (2) The database mechanism for enforcing such." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures 2nd Ed", 2012)

"A rule that limits the values that can be inserted, deleted, or updated in a table." (Sybase)

20 March 2009

🛢DBMS: Data Source (Definitions)

"The source of data for an object such as a cube or a dimension. Also, the specification of the information necessary to access source data. Sometimes refers to a DataSource object." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A repository for storing data. An ODBC/JDBC term." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"A file that contains the connection string that Analysis Services uses to connect to the database that hosts the data as well as any necessary authentication credentials." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"A system or application that generates data for use by another system or by an end user. The data source may also be the system of origin for the data." (Evan Levy & Jill Dyché, "Customer Data Integration", 2006)

"An information store that can be connected to by various SQL Server technologies such as SQL Server Reporting Services for data retrieval." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft® SQL Server™ 2005 Optimization and Maintenance 70-444", 2007)

"An entity or group of entities from which data can be collected. The entities may be people, objects, or processes." (Jens Mende, "Data Flow Diagram Use to Plan Empirical Research Projects", 2009)

"An object containing information about the location of data. The data source leverages a connection string." (Jim Joseph et al, "Microsoft® SQL Server™ 2008 Reporting Services Unleashed", 2009)

"A repository of data to which a federated server can connect and then retrieve data by using wrappers. A data source can contain relational databases, XML files, Excel spreadsheets, table-structured files, or other objects. In a federated system, data sources seem to be a single collective database." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

19 March 2009

🛢DBMS: Scalar Aggregate (Definitions)

"An aggregate function that produces a single value from a select statement that does not include a group by clause. This is true whether the aggregate function is operating on all the rows in a table or on a subset of rows defined by a where clause." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

[vector aggregate:] "A value that results from using an aggregate function with a group by clause." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"When aggregate functions are applied to the whole or partial table without the GROUP BY clause and return only one row." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

[vector aggregates:] "When aggregate functions are used with the GROUP BY clause, they return values for each group. These are called vector aggregates." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A function applied to all of the rows in a table (producing a single value per function). An aggregate function in the select list with no GROUP BY clause applies to the whole table and is an example of a scalar." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

[vector aggregate:] "Functions applied to all rows that have the same value in a specified column or expression by using the GROUP BY clause and, optionally, the HAVING clause (producing a value for each group per function)." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"An aggregate value that is calculated on the data source. Depending on the data source, server aggregates can be treated as detail data or as aggregates based on the dataset option InterpretSubtotalsAsDetails." (Microsoft Technet)

[aggregate of aggregates:] "A summary value calculated from aggregates, such as the maximum of a set of sums." (Microsoft Technet)

 "An aggregate function, such as MIN(), MAX(), or AVG(), that is specified in a SELECT statement column list that contains only aggregate functions." (Microsoft Technet)

18 March 2009

🛢DBMS: Data Independence (Definitions)

[logical data independence:] "Application programs and terminal activities remain logically unimpaired when information preserving changes of any kind that theoretically permit unimpairment are made to the base tables." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

[physical data independence:] "Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A condition that exists when data access is unaffected by changes in the physical data storage characteristics." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management 9th Ed", 2011)

"Data independence is the characteristic that enables data to be easily combined into usually unlimited number of different structures." (Michael M David & Lee Fesperman, "Advanced SQL Dynamic Data Modeling and Hierarchical Processing", 2013)

"A condition that exists when data access is unaffected by changes in the physical data storage characteristics." (Carlos Coronel & Steven Morris, "Database Systems: Design, Implementation, & Management"  11th Ed., 2014)

"The isolation of data from the use of the data such that a change to one does not affect the other." (George Tillmann, "Usage-Driven Database Design: From Logical Data Modeling through Physical Schmea Definition", 2017)

"Data independence is a database management system (DBMS) characteristic that lets programmers modify information definitions and organization without affecting the programs or applications that use it. Such property allows various users to access and process the same data for different purposes, regardless of changes made to it." (Techslang) [source]

"The property of being able to change the overall logical or physical structure of the data without changing the application program's view of the data." (GRC Data Intelligence)

"The degree to which the logical view of a database is immune to changes in the physical structure of the database." (IEEE 610.5-1990)

17 March 2009

🛢DBMS: Aggregate Data (Definitions)

"A group such as grouped data. For example, when aggregating data, we are grouping data. A common aggregate function is Avg (average). It looks at a group of data (an aggregate) and provides an average." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"Data resulting from processes that combine and summarize atomic data." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

[aggregate:] "Pertaining to a combination of multiple values." (Microsoft, "SQL Server 2012 Glossary", 2012)

"Data that is the result of applying a process to combine data elements collectively or in summary form. The SQL SELECT List does this very easily and offers quite a bit of dynamic control." (Michael M David & Lee Fesperman, "Advanced SQL Dynamic Data Modeling and Hierarchical Processing", 2013)

[aggregate operation:] "An operation on a data structure as a whole, as opposed to an operation on an individual component of the data structure" (Nell Dale & John Lewis, "Computer Science Illuminated, 6th Ed.", 2015)

[aggregated data:] "Refers to data that has been scrubbed of any personally or entity identifiable information and then generally combined with similar information from other parties." (James R Kalyvas & Michael R Overly, "Big Data: A Business and Legal Guide", 2015)

"Structured data that results from applying a process to more detailed data - data that is summarized or averaged." (Ciara Heavin & Daniel J Power, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

[data aggregate:] "A collection of two or more data items that are treated as a unit." (IEEE 610.5-1990)
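
A short SQL sketch of deriving aggregate data from atomic rows, assuming a hypothetical OrderLine table; the detail rows are combined and summarized per month:

```sql
SELECT YEAR(OrderDate)  AS OrderYear,
       MONTH(OrderDate) AS OrderMonth,
       SUM(Amount)      AS TotalAmount,   -- combined
       AVG(Amount)      AS AvgAmount      -- summarized
FROM dbo.OrderLine
GROUP BY YEAR(OrderDate), MONTH(OrderDate);
```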

16 March 2009

🛢DBMS: SQL Injection (Definitions)

"SQL injection is a technique that exploits security vulnerabilities in the application layer and middle tier, allowing users to execute arbitrary SQL statements on a server." (Michael Coles, "Pro T-SQL 2008 Programmer's Guide", 2008)

"A security vulnerability that occurs in the persistence/database layer of a Web application. This vulnerability is derived from the incorrect escaping of variables embedded in SQL statements. It is in fact an instance of a more general class of vulnerabilities based on poor input validation and bad design that can occur whenever one programming or scripting language is embedded inside another." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"A form of Web hacking whereby SQL statements are specified in a Web form to expose data to the attacker." (Craig S Mullins, "Database Administration", 2012)

"SQL injection is a technique that exploits security vulnerabilities in the application layer and middle tier, allowing users to execute arbitrary SQL statements on a server." (Jay Natarajan et al, "Pro T-SQL 2012 Programmer's Guide 3rd Ed", 2012)

"The process of manipulating a web application to run SQL commands sent by an attacker." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition, 2nd Ed.", 2013)

"A technique that exploits security vulnerabilities in the application layer and middle tier, allowing users to execute arbitrary SQL statements on a server." (Miguel Cebollero et al, "Pro T-SQL Programmer’s Guide 4th Ed", 2015)

🛢DBMS: Hash Table (Definitions)

"A data structure used internally by Perl for implementing associative arrays (hashes) efficiently. See also bucket." (Jon Orwant et al, "Programming Perl" 4th Ed., 2012)

[hash cluster:] "A type of table cluster that is similar to an indexed cluster, except the index key is replaced with a hash function. No separate cluster index exists. In a hash cluster, the data is the index." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"An in-memory data structure that associates join keys with rows in a hash join. For example, in a join of the employees and departments tables, the join key might be the department ID. A hash function uses the join key to generate a hash value. This hash value is an index in an array, which is the hash table." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"The data structure used to store elements using hashing" (Nell Dale et al, "Object-Oriented Data Structures Using Java" 4th Ed., 2016)

"An object that is like a dictionary or an associative array. A hash table stores and retrieves elements using key values called hashcodes. See also hashcode." (Daniel Leuck et al, "Learning Java" 5th Ed., 2020)

[sorted hash cluster:] "A hash cluster that stores the rows corresponding to each value of the hash function in such a way that the database can efficiently return them in sorted order. The database performs the optimized sort internally." (Oracle, "Oracle Database Concepts")

"An in-memory data structure that associates join keys with rows in a hash join. For example, in a join of the employees and departments tables, the join key might be the department ID. A hash function uses the join key to generate a hash value. This hash value is an index in an array, which is the hash table." (Oracle, "Oracle Database Concepts")

"A two-dimensional table of items in which a hash function is applied to the key of each item to determine its hash value. The hash value identifies each item's primary position in the table, and if this position is already occupied, the item is inserted either in an overflow table or in another available position in the table." (IEEE 610.5-1990)

🛢DBMS: Hash Index (Definitions)

"A hashing algorithm is used to organize an index into a sequence, where each indexed value is retrievable based on the result of the hash key value. Hash indexes are efficient with integer values, but are usually subject to overflow as a result of changes." (Gavin Powell, "Beginning Database Design", 2006)

"An index based on an ordered list of hash values." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"An index based on an ordered list of hash values." (Carlos Coronel & Steven Morris, "Database Systems: Design, Implementation, & Management" 11th Ed., 2014)

 "A type of index intended for queries that use equality operators, rather than range operators such as greater-than or BETWEEN. It is available for MEMORY tables. Although hash indexes are the default for MEMORY tables for historic reasons, that storage engine also supports B-tree indexes, which are often a better choice for general-purpose queries. MySQL includes a variant of this index type, the adaptive hash index, that is constructed automatically for InnoDB tables if needed based on runtime conditions." (MySQL, "MySQL 8.0 Reference Manual Glossary")

[adaptive hash index:] "An optimization for InnoDB tables that can speed up lookups using = and IN operators, by constructing a hash index in memory. MySQL monitors index searches for InnoDB tables, and if queries could benefit from a hash index, it builds one automatically for index pages that are frequently accessed." (MySQL, "MySQL 8.0 Reference Manual Glossary")

"Hash indexes are file structures that can be used either to resolve queries by accessing the index instead of its underlying base table or to enhance access performance when they do not cover a query by providing a secondary access path to requested base table rows. They can either substitute for or point to base table rows." (Teradata)

🛢DBMS: Query Plan [QP] (Definitions)

"The ordered set of steps required to carry out a query, complete with the access methods chosen for each table." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A portion of a DBMS that determines the most efficient sequence of relational algebra operations to use to satisfy a query." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"The plan produced by an optimizer for processing a query." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A query plan is a sequence of logical and physical operators and data flows that the SQL query optimizer returns for use by the query processor to retrieve or modify data." (Michael Coles, "Pro T-SQL 2008 Programmer's Guide", 2008)

"Once the query optimizer determines the best way to execute a query, it creates a query plan. This identifies all the elements of the query, including what indexes are used, what types of joins are employed, and more." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A sequence of logical and physical operators and data flows that the SQL query optimizer returns for use by the query processor to retrieve or modify data." (Miguel Cebollero et al, "Pro T-SQL Programmer’s Guide" 4th Ed., 2015)

[adaptive query plan:] "An execution plan that changes after optimization because run-time conditions indicate that optimizer estimates are inaccurate. An adaptive query plan has different built-in plan options. During the first execution, before a specific subplan becomes active, the optimizer makes a final decision about which option to use. The optimizer bases its choice on observations made during the execution up to this point. Thus, an adaptive query plan enables the final plan for a statement to differ from the default plan." (Oracle)

[default plan:] "For an adaptive plan, the execution plan initially chosen by the optimizer using the statistics from the data dictionary. The default plan can differ from the final plan." (Oracle)

[execution plan:] "The combination of steps used by the database to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the session issuing the statement." (Oracle)

[query execution plan:] "The set of decisions made by the optimizer about how to perform a query most efficiently, including which index or indexes to use, and the order in which to join tables." (MySQL)
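
Most engines let you inspect the chosen plan directly: in MySQL, prefixing a statement with EXPLAIN returns the optimizer's decisions instead of executing the query, and SQL Server offers SET SHOWPLAN_XML ON for the same purpose. The tables below are hypothetical:

```sql
EXPLAIN
SELECT e.last_name, d.dept_name
FROM employees   AS e
JOIN departments AS d ON d.dept_id = e.dept_id
WHERE d.dept_name = 'Sales';
-- The output shows the access method per table (index lookup vs. full scan),
-- the join order, and the estimated row counts, i.e. the query plan.
```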

🛢DBMS: Network Model (Definitions)

[network database model:] "Essentially a refinement of the hierarchical database model. The network model allows child tables to have more than one parent, thus creating a networked-like table structure. Multiple parent tables for each child allow for many-to-many relationships, in addition to one-to-many relationships." (Gavin Powell, "Beginning Database Design", 2006)

[complex network data model:] "A navigational data model that supports direct many-to-many relationships." (Jan L Harrington, "Relational Database Design: Clearly Explained" 2nd Ed., 2002)

[simple network data model:] "A navigational data model that supports only one-to-many relationships but allows an entity to have an unlimited number of parent entities." (Jan L Harrington, "Relational Database Design: Clearly Explained" 2nd Ed., 2002)

[complex network data model:] "A navigational data model that permits direct many-to-many relationships as well as one-to-many and one-to-one relationships." (Jan L Harrington, "Relational Database Design and Implementation: Clearly explained" 3rd Ed., 2009)

[simple network data model:"A legacy data model where all relationships are one-to-many or one-toone; a navigational data model where relationships are represented with physical data structures such as pointers." (Jan L Harrington, "Relational Database Design and Implementation: Clearly explained" 3rd Ed., 2009)

"A data model standard created by the CODASYL Data Base Task Group in the late 1960s. It represented data as a collection of record types and relationships as predefined sets with an owner record type and a member record type in a 1:M relationship." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"A DBMS architecture where record types are organized in a many-to-many structure consisting of multiple parent-child sets." (George Tillmann, "Usage-Driven Database Design: From Logical Data Modeling through Physical Schmea Definition", 2017)

"A network model is a database model that is designed as a flexible approach to representing objects and their relationships. A unique feature of the network model is its schema, which is viewed as a graph where relationship types are arcs and object types are nodes." (Techopedia) [source]


15 March 2009

🛢DBMS: Precision (Definitions)

"The maximum number of decimal digits that can be stored by numeric and decimal datatypes. The precision includes all digits, both to the right and to the left of the decimal point." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"The maximum total number of decimal digits that can be stored, both to the left and right of the decimal point." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"The degree of detail used to state a numeric quantity; for example, writing a value to two decimal places instead of five decimal places. Contrast with accuracy." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"This is the total number of digits that can be stored in an object that uses the decimal datatype." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"Refers to the preciseness with which a numerical quantity is expressed." (Michael Fitzgerald, "Learning Ruby", 2007)

"In a floating-point number, the number of digits to the right of the decimal point." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

"The maximum number of significant digits that can be represented" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"An attribute of a number that describes the total number of binary or decimal digits. An attribute of a timestamp that describes the total number of decimal digits in the fractional seconds part of the value." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

🛢DBMS: Hash Joins (Definitions)

"A sophisticated join algorithm that builds an interim structure to derive result sets." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A method for producing a joined table. Given two input tables Table1 and Table2, processing is as follows: (a) For each row in Table1, produce a hash. Assign the hash to a hash bucket. (b) For each row in Table2, produce a hash. Check if the hash is already in the hash bucket. If it is: there's a join. If it is not: there's no join." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"An efficient method of searching two tables to be joined when they have very low selectivity (i.e., very few matching values). Common values are matched in fast memory, then the rest of the data record is obtained using hashing mechanisms to access the disk only once for each record." (Sam Lightstone et al, "Physical Database Design: The Database Professional’s Guide to Exploiting Indexes, Views, Storage, and More", 2007)

"A method for joining large data sets. The database uses the smaller of two data sets to build a hash table on the join key in memory. It then scans the larger data set, probing the hash table to find the joined rows." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"The hash join is based on a hash function that provides access to items in the joining data structure in constant time. A hash function maps arbitrary inputs to fixed length keys, even though the inputs might have variable lengths. The joining data structure for the hash join is a so-called hash map, which implements an associative array that maps keys to values." (Hasso Plattner, "A Course in In-Memory Data Management: The Inner Mechanics of In-Memory Databases" 2nd Ed., 2014)

 "A join in which the database uses the smaller of two tables or data sources to build a hash table in memory. The database scans the larger table, probing the hash table for the addresses of the matching rows in the smaller table." (Oracle, "Oracle Database Concepts")

🛢DBMS: Phantom read

"Occur when one transaction reads a set of rows that satisfy a search condition, and then a second transaction modifies the data (through an insert, delete, update, and so on). If the first transaction repeats the read with the same search conditions, it obtains a different set of rows." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A phenomenon that occurs when a transaction attempts to select a row that does not exist and a second transaction inserts the row before the first transaction finishes. If the row is inserted, the row appears as a phantom to the first transaction, inconsistently appearing and disappearing." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A problem arising with concurrent transactions. The Phantom problem occurs when a transaction reads multiple rows twice; once before and once after another transaction does a data change that affects the search condition in the first transaction's reads. The result is that Transaction #1 gets a different (larger) result set back from its second read. You can avoid Phantoms by using an isolation level of SERIALIZABLE." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"A phantom read (or phantom row) describes the occurrence of data returned by a statement in a transaction which was not returned by an earlier statement (with the same WHERE clause) within the same transaction." (Sara Morganand & Tobias Thernstrom , "MCITP Self-Paced Training Kit : Designing and Optimizing Data Access by Using Microsoft SQL Server 2005 - Exam 70-442", 2007)

"A problem with uncontrolled concurrent use of a database that occurs when a transaction reads data for the second time and determines that new rows have been inserted by another transaction." (Jan L Harrington, "Relational Database Design and Implementation, 3rd Ed.", 2009)

"The difference in result tables that occurs when a nonserialized transaction reads the same data twice and different rows are retrieved as a result of the actions of other interleaves transactions." (Jan L Harrington, "SQL Clearly Explained 3rd Ed. ", 2010)

"A table row that can be read by application processes that are executing with any isolation level except repeatable read. When an application process issues the same query multiple times within a single unit of work, additional rows can appear between queries because of the data being inserted and committed by application processes that are running concurrently." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Pertaining to the insertion of a new row or the deletion of an existing row in a range of rows that were previously read by another task, where that task has not yet committed its transaction." (Microsoft, "SQL Server 2012 Glossary", 2012)

🛢DBMS: Performance Baseline (Definitions)

 "A set of metrics gathered during a performance analysis process that forms the basis of a performance tuning methodology." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft® SQL Server™ 2005 Optimization and Maintenance 70-444", 2007)

"A baseline is a known starting point for something. In the context of the MCITP Database Developer certification, it's a known starting point for a server. For example, when creating a performance baseline, we would measure the four core resources of a system: CPU, memory, disk, and network. A performance baseline would take a snapshot of the resources (perhaps every 30 minutes) over a period of about a week. Six months later, another counter log could be created, and by comparing it to the baseline, an administrator can identify what has changed." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A baseline measurement is taken to serve as a point of comparison for subsequent measurement." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement", 2012)

"In the context of AWR, the interval between two AWR snapshots that represent the database operating at an optimal level." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"The beginning point, based on an evaluation of output over a period of time, used to determine the process parameters prior to any improvement effort; the basis against which change is measured." (ASQ)

"Benchmark used as a reference point" (ITIL)


🛢DBMS: Semantic Data Model (Definitions)

"Semantic data model provides a vocabulary for expressing the meaning as well as the structure of database data." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A design tool for databases that uses concept-level language elements. The main role of semantic models is that they can provide an abstract approach; they are easy to understand and they provide database independence." (László Kovács et al, "Ontology-Based Semantic Models for Databases", 2009) 

"A high level data model. It is usually based on concepts and it uses a graphical formalism. It contains only the key, the semantic properties of the data structure. It does not cover the details of the implementation." (László Kovács & Tanja Sieber, "Multi-Layered Semantic Data Models",  Encyclopedia of Artificial Intelligence, 2009)

"A conceptual data model that provides structure and defines meaning for non-tabular data, making that meaning explicit enough that a human or software agent can reason about it." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A semantic data model is a conceptual data model with semantic information included." (Michael M David & Lee Fesperman, "Advanced SQL Dynamic Data Modeling and Hierarchical Processing", 2013)

"The first of a series of data models that more closely represented the real world, modeling both data and their relationships in a single structure known as an object. The SDM, published in 1981, was developed by M. Hammer and D. McLeod." (Carlos Coronel & Steven Morris, "Database Systems: Design, Implementation, & Management" 11th  Ed., 2014)

"The development of descriptions and representations of data in such a way that the latter’s meaning is explicit, accurate, and commonly understood by both humans and computer systems." (Panos Alexopoulos, "Semantic Modeling for Data", 2020)

"The semantic data model is a method of structuring data in order to represent it in a specific logical way. It is a conceptual data model that includes semantic information that adds a basic meaning to the data and the relationships that lie between them. This approach to data modeling and data organization allows for the easy development of application programs and also for the easy maintenance of data consistency when data is updated." (Techopedia) [source]
