23 July 2009

DBMS: Session (Definitions)

"In Oracle, a single connection of an authenticated Oracle user to a database for a period of time. A given user may have several sessions running at the same time. Sessions may be long (as when a developer connects to Oracle via SQL*Plus) or short (as when the Oracle web gateway produces a single web page)." (Bill Pribyl & Steven Feuerstein, "Learning Oracle PL/SQL", 2001)

"In English Query, a sequence of operations performed by the English Query engine. A session begins when a user logs on and ends when the user logs off. All operations during a session form one transaction scope and are subject to permissions determined by the logon username and password." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A block of time during which a user interacts with a database." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

"A period of time when a connection is active and communication can take place. For the purpose of data communication between functional units, session also refers to all the activities that take place during the establishment, maintenance, and release of the connection." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A logical or virtual connection between two stations, software programs, or devices on a network that allows the two elements to communicate and exchange data for the duration of the session." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

17 July 2009

DBMS: Checkpoint (Definitions)

"The point at which all data pages that have been changed are guaranteed to have been written to the database device." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"Issued at intervals to run through the transaction log and verify that all committed transactions at that point are physically written back to the database." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"An event in which the database engine writes dirty buffer pages to disk. Dirty pages are pages that have been modified, but the modifications have not yet been written to disk. Each checkpoint writes to disk all pages that were dirty at the last checkpoint and still have not been written to disk. Checkpoints occur periodically based on the number of log records generated by data modifications, or when requested by a user or a system shutdown." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"This is an entry that SQL Server records in a transaction log when it copies transactions from the log to the datafile." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"An event in which the database engine writes pages that have been modified, but the modifications have not yet been written to disk. Checkpoints can occur periodically based on the number of log records generated by data modifications or when requested by a user or a system shutdown. A point that you can return to directly if the package should fail past that point. Complex packages will often have multiple checkpoints to reduce the amount of work that will need to be redone if a failure occurs." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"An event in which the Database Engine writes dirty buffer pages to disk. Each checkpoint writes to disk all the pages that were dirty at the last checkpoint and still have not been written to disk." (Microsoft, "SQL Server 2012 Glossary", 2012)

"The process of storing a set of data or system state that is in a known consistent or safe state so that it can be later restored by a rollback operation if the system fails or the data is corrupted." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"A point at which the database manager records internal status information in the log; the recovery process uses this information if the subsystem abnormally terminates." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

DBMS: One-to-One Relationship (Definitions)

"A single instance of one entity is associated with a single instance of another entity." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A relationship between two tables in which a single row in the first table can be related only to one row in the second table, and a row in the second table can be related to only one row in the first table. This type of relationship is unusual." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A relationship type between tables where one row in a given table is related to only one or zero rows in a second table. This relationship type is often used for subtyping. For example, an EMPLOYEE table may hold the information common to all employees, while the FULLTIME, PARTTIME, and CONTRACTOR tables hold information unique to full-time employees, part-time employees, and contractors, respectively. These entities would be considered subtypes of an EMPLOYEE and maintain a one-to-one relationship with the EMPLOYEE table." (Bob Bryla, "Oracle Database Foundations", 2004)

"Occurs when one row or thing of an entity is associated with only one row or thing of another. One-to-one relationships are uncommon in the real world." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"The relationship between two tables dictated by having one record in each table, and not more than one record in either table, related back to the other table." (Gavin Powell, "Beginning Database Design", 2006)

"A relationship between two tables in which a single row in the first table can be related to only one row in the second table, and a row in the second table can be related to only one row in the first table." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"Used in a relational database to denote that a single row in the parent table can be related to only one row in the related child table and that a row in the child table can be related to only a single row in the referenced parent table." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"Occurs when one record in a table corresponds to exactly one record in another table." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"One of three types of relationships (associations among two or more entities) that are used by data models. In a 1:1 relationship, one entity instance is associated with only one instance of the related entity." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"A link between two entities in which the cardinality of both sides of the relationship is one." (Craig S Mullins, "Database Administration", 2012)

"A relationship between two entities in a database such that each instance of an entity is related to no more than one instance of the other entity." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

16 July 2009

DBMS: Referential Integrity (Definitions)

"The rules governing data consistency, specifically the relationships among the primary keys and foreign keys of different tables. SQL Server addresses referential integrity with user-defined triggers." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"When a table has relationships with other tables, they are linked on a field (or group of fields). Referential integrity ensures that the copy of the key field kept in one table matches the key field in the other." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"An integrity mechanism that ensures that vital data in a database, such as the unique identifier for a given piece of data, remains accurate and usable as the database changes. Referential integrity involves managing corresponding data values between tables when the foreign key of a table contains the same values as the primary key of another table." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Mandatory condition in a data warehouse where all the keys in the fact tables are legitimate foreign keys relative to the dimension tables. In other words, all the fact key components are subsets of the primary keys found in the dimension tables at all times." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit 2nd Ed ", 2002)

"A state in which all foreign key values in a database are valid." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A method employed by a relational database system that enforces one-to-many relationships between tables." (Bob Bryla, "Oracle Database Foundations", 2004)

"A feature of some database systems that ensures that any record stored in the database is supported by accurate primary and foreign keys." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"The facility of a DBMS to ensure the validity of predefined relationships." (William H Inmon, "Building the Data Warehouse", 2005)

"A process (usually contained within a relational database model) of validation between related primary and foreign key field values. For example, a foreign key value cannot be added to a table unless the related primary key value exists in the parent table. Similarly, deleting a primary key value necessitates removing all records in subsidiary tables, containing that primary key value in foreign key fields. Additionally, it follows that preventing the deletion of a primary key record is not allowed if a foreign key exists elsewhere." (Gavin Powell, "Beginning Database Design", 2006)

"The assurance that a reference from one entity to another entity is valid. If entity A references entity B, entity B exists. If entity B is removed, all references to entity B must also be removed." (Pramod J Sadalage & Scott W Ambler, "Refactoring Databases: Evolutionary Database Design", 2006)

"Relational database integrity that dictates that all foreign key values in a child table must have a corresponding matching primary key value in the parent table." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance 70-444", 2007)

"The referential integrity imposes the constraint that if a foreign key exists in a relation, either the foreign key value must match a candidate key value of some tuple in its home relation or the foreign key value must be wholly null." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A set of rules, enforced by the database server, the user’s application, or both, that protects the quality and consistency of information stored in the database." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"Requires that relationships among tables be consistent. For example, foreign key constraints must be satisfied. You cannot accept a transaction until referential integrity is satisfied." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A constraint on a relation that states that every non-null foreign key value must match an existing primary key value." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"A constraint in a SQL database that requires, for every foreign key instance that exists in a table, that the row (and thus the primary key instance) of the parent table associated with that foreign key instance must also exist in the database." (Toby J Teorey, ", Database Modeling and Design" 4th Ed, 2010)

"A constraint on a relation that states that every non-null foreign key value must reference an existing primary key value." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

"In a relational database, the quality of a table that all its associations are with real instances of other tables." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"Refers to two relational tables that are directly related. Referential integrity between related tables is established if non-null values in the foreign key field of the child table are primary key values in the parent table." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A condition by which a dependent table’s foreign key must have either a null entry or a matching entry in the related table. Even though an attribute may not have a corresponding attribute, it is impossible to have an invalid entry." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed, 2011)

"In data management, constraints that govern the relationship of an occurrence of one entity to one or more occurrences of another entity. These constraints may be automatically enforced by the DBMS. For instance, every purchase order must have one and only one customer. If the relationship is represented using a foreign key, then the foreign key is said to reference a file or entity table where the identifier is from the same domain. Having referential integrity means that IF a value exists in the foreign key of the referencing file, then it must exist as a valid identifier in the referenced file or table." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Through the specification of appropriate referential constraints, RI guarantees that an acceptable value is always in each foreign key column." (Craig S Mullins, "Database Administration", 2012)

"Refers to the accuracy and consistency of records, and the assurance that they are genuine and unaltered." (Robert F Smallwood, "Information Governance: Concepts, Strategies, and Best Practices", 2014)

"The process of relating data together in a disciplined manner" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"A requirement that the data in related tables be matched, so that an entry in the 'many' side of the relationship (the foreign key) must have a corresponding entry in the “one” side of the relationship (the primary key)." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"Refers to the accuracy and consistency of records, and the assurance that they are genuine and unaltered." (Robert F Smallwood, "Information Governance for Healthcare Professionals", 2018)

"The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of a referential constraint on all operations that change the data in a table where the referential constraints are defined." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"A rule defined on a key in one table that guarantees that the values in that key match the values in a key in a related table (the referenced value)." (Oracle, "Oracle Database Concepts")

"A state in which all foreign key values in a database are valid. For a foreign key to be valid, it must contain either the value NULL, or an existing key value from the primary or unique key columns referenced by the foreign key." (Microsoft Technet)

"The technique of maintaining data always in a consistent format, part of the ACID philosophy. In particular, data in different tables is kept consistent through the use of foreign key constraints, which can prevent changes from happening or automatically propagate those changes to all related tables. Related mechanisms include the unique constraint, which prevents duplicate values from being inserted by mistake, and the NOT NULL constraint, which prevents blank values from being inserted by mistake." (MySQL, "MySQL 8.0 Reference Manual Glossary")

DBMS: Online Analytical Processing (Definitions)

"A technology that uses multidimensional structures to provide rapid access to data for analysis. The source data for OLAP is commonly stored in data warehouses in a relational database." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

[analytical processing:] "A general term that encompasses data warehousing and OLAP. Analytical processing produces information for management decisions. Contrasts with operational processing." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"The capability to view data in different ways, organizing the data by various dimensions to perform analysis, query, and reporting interactively." (Margaret Y Chu, "Blissful Data ", 2004)

[analytical processing:] "using the computer to produce an analysis for management decision, usually involving trend analysis, drill-down analysis, demographic analysis, profiling, and so forth." (William H Inmon, "Building the Data Warehouse", 2005)

"A database designed to support analysis for decision making in an organization." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"The ability for a user to 'drill down' on various data attributes in order to gain a more detailed view of the data. Such analysis enables a user to view different perspectives of the same data in order to facilitate decision making. OLAP is part of the broader category of business intelligence." (Jill Dyché & Evan Levy, "Customer Data Integration: Reaching a Single Version of the Truth", 2006)

"Tools that provide different ways of summarizing multidimensional data." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"Process whereby raw data is stored in a multidimensional format so that it can be analyzed easily by decision-makers." (Sara Morganand & Tobias Thernstrom , "MCITP Self-Paced Training Kit : Designing and Optimizing Data Access by Using Microsoft SQL Server 2005 - Exam 70-442", 2007)

"A data mining approach for performing multi-dimensional queries." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A collection of common business analysis functions that are difficult to perform directly with SQL. Some of the specific functions that fall under the OLAP umbrella include time series comparisons, ranking, ratios, penetration, thresholds, and contribution to report or to the whole data population. Most business intelligence tools provide this type of functionality. The capabilities can be implemented in a variety of different data storage mechanisms." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A query service that overlays a data warehouse by creating and maintaining a set of summary views (automatic summary tables, or ASTs) to enable quick access to summary data." (Toby J Teorey, "Database Modeling and Design 4th Ed", 2010)

"An approach to database design that focuses on analytical activities such as viewing data in various aggregations, slicing and dicing data to meet different criteria, and grouping data." (Ken Withee, "Microsoft Business Intelligence For Dummies", 2010)

"An approach to quickly answer multidimensional analytical queries." (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"Systems that contain read-only data that can be queried and analyzed much more efficiently than OLTP application databases." (Linda Volonino & Efraim Turban, "Information Technology for Management 8th Ed", 2011)

"A type of computer processing that provides analysis of data stored in a database. OLAP tools enable users to analyze different dimensions of multidimensional data." (Craig S Mullins, "Database Administration", 2012)

"This technique for analyzing business data uses cubes, which are like multidimensional pivot tables in spreadsheets. OLAP tools can perform trend analysis and enable drilling down into data. They enable multidimensional analysis, such as analyzing by time, product, and geography." (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"OLAP is software for manipulating multidimensional data from a variety of sources that has been stored in a data warehouse. The software can create various views and representations of the data. OLAP software provides fast, consistent, interactive access to shared, multidimensional data." (Ciara Heavin & Daniel J Power, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

"The process of collecting data from one or many sources; transforming and analyzing the consolidated data quickly and interactively; and examining the results across different dimensions of the data by looking for patterns, trends, and exceptions within complex relationships of that data." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

15 July 2009

DBMS: Online Transaction Processing (Definitions)

"A database management system representing the state of a particular business function at a specific point in time. An OLTP database is typically characterized by having large numbers of concurrent users actively adding and modifying data." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A data processing system designed to record all of the business transactions of an organization as they occur. An OLTP system is characterized by many concurrent users actively adding and modifying data." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"Any software capability that applies transactional updates and inquiries interactively." (Margaret Y Chu, "Blissful Data", 2004)

"A relational database system used to manage the day-to-day operations of an organization." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"The operational processes for executing a business activity while the customer or end user waits for the execution to complete. One example of OLTP would be an automated teller transaction." (Evan Levy & Jill Dyché, "Customer Data Integration", 2006)

"A data-processing system designed to record all the business transactions of an organization as they occur. An OLTP system is characterized by many concurrent users actively adding and modifying data. Typically, OLTP systems perform large numbers of relatively small transactions." (Jim Joseph et al, "Microsoft® SQL Server™ 2008 Reporting Services Unleashed", 2009)

"Online transaction processing (OLTP) systems are the fundamental systems used to run the business. These are also called operational systems or operational applications. They are often used as sources of data for the data warehouse." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A data-processing system designed to record all the business transactions of an organization as they occur. An OLTP system is characterized by many concurrent users actively adding and modifying data. Typically, OLTP systems perform large numbers of relatively small transactions." (Jim Joseph, "Microsoft SQL Server 2008 Reporting Services Unleashed", 2009)

"An approach to database design that focuses on data transactions  in particular inserting, updating, and deleting data." (Ken Withee, "Microsoft Business Intelligence For Dummies", 2010)

"Class of systems that facilitate and manage transaction-oriented applications." (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"A transaction processing system where transactions are executed as soon as they occur." (Linda Volonino & Efraim Turban, "Information Technology for Management 8th Ed", 2011)

"A type of computer processing in which the computer responds immediately to user requests. Each request is a transaction. The opposite of transaction processing is batch processing." (Craig S Mullins, "Database Administration", 2012)

"A type of interactive application in which requests that are submitted by users are processed as soon as they are received. Results are returned to the requester in a relatively short period of time." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Online transaction processing (OLTP) is a mode of processing that is characterized by short transactions recording business events and that normally requires high availability and consistent, short response times. This category of applications requires that a request for service be answered within a predictable period that approaches 'real time'. Unlike traditional mainframe data processing, in which data is processed only at specific times, transaction processing puts terminals online, where they can update the database instantly to reflect changes as they occur. In other words, the data processing models the actual business in real time, and a transaction transforms this model from one business state to another. Tasks such as making reservations, scheduling and inventory control are especially complex; all the information must be current." (Gartner)
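
The hallmark of OLTP is the short transaction that either completes as a unit or not at all. Here is a minimal sketch with Python's sqlite3 module; the account table and balances are invented.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance NUMERIC)")
con.executemany("INSERT INTO account VALUES (?, ?)", [(1, 500), (2, 200)])
con.commit()

# A typical OLTP request: a small transfer that commits as a unit
# or is rolled back entirely if anything fails.
with con:   # the context manager commits on success and rolls back on error
    con.execute("UPDATE account SET balance = balance - 50 WHERE id = 1")
    con.execute("UPDATE account SET balance = balance + 50 WHERE id = 2")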

DBMS: Many-to-Many Relationship (Definitions)

"Multiple instances of one entity are associated with one or more instances of another entity." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A relationship between two tables in which rows in each table have multiple matching rows in the related table. Many-to-many relationships are maintained by using a third table called a junction table." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A logical data relationship in which the value of one data element can exist in combination with many values of another data element, and vice versa." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit" 2nd Ed., 2002)

"A relationship type between tables in a relational database where one row of a given table may be related to many rows of another table, and vice versa. Many-to-many relationships are often resolved with an intermediate associative table." (Bob Bryla, "Oracle Database Foundations", 2004)

"This type of relationship occurs when many rows or things in an entity (many instances of an entity) are associated with many rows or things in another entity. This type of relationship is not uncommon in the real world. SQL Server doesn't actually allow direct implementation of many-to-many relationships; nevertheless, you can do so by creating two one-to-many relationships to a new entity." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"A relationship where one object of one type may correspond to many objects of another type and vice versa. For example, one COURSE may include many STUDENTs and one STUDENT may be enrolled in many COURSEs. Normally you implement this kind of relationship by using an intermediate table that has one-to-many relationships with the original tables." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A relationship between two entities in a database such that each instance of the first entity can be related to many instances of the second and each instance of the second entity can be related to many instances of the first." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"A relationship where an occurrence of each entity class may be associated with one or more occurrences of the other entity class." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"One of three types of relationships (associations among two or more entities) in which one occurrence of an entity is associated with many occurrences of a related entity and one occurrence of the related entity is associated with many occurrences of the first entity." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"A link between two entities in which the cardinality of both sides of the relationship is multiple." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures", 2012)

"A relationship between two tables in which one row in one table can relate to many rows in another table." (Microsoft, "SQL Server 2012 Glossary", 2012)

13 July 2009

DBMS: Restore (Definitions)

"To restore an entire database and transaction log, database file(s), or a transaction log from a backup." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"This is the process of bringing a database back to a stable condition after a disaster." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

[point-in-time restore:] "Recovering only the transactions within a log backup committed before a specific point in time, instead of recovering the entire backup." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"The process of reinstating archived information onto your database server." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A multi-phase process that copies all the data and log pages from a specified backup to a specified database (the data-copy phase) and rolls forward all the transactions that are logged in the backup (the redo phase). At this point, by default, a restore rolls back any incomplete transactions (the undo phase), which completes the recovery of the database and makes it available to users." (Microsoft, "SQL Server 2012 Glossary", 2012)

"To rebuild a damaged or corrupted database or table space from a backup image produced with the backup database utility." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

09 July 2009

DBMS: Rollback (Definitions)

"A Transact-SQL statement used with a user-defined transaction (before a commit transaction has been received) that cancels the transaction and undoes any changes that were made to the database." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"Rollback of a user-specified transaction to the last savepoint inside a transaction or to the beginning of a transaction." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Terminates a transaction so that all resources updated within a transaction revert to the original state before the transaction started." (Atul Apte, "Java Connector Architecture: Building Custom Connectors and Adapters", 2002)

"The point in a transaction when all updates to any resources involved in the transaction are reversed." (Kim Haase et al, "The J2EE Tutorial", 2002)

"To remove the updates performed by one or more partially completed transactions. Rollbacks are required to restore the integrity of a database after an application, database, or system failure." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"Rolling back a transaction means you are undoing the actual transaction you are currently in." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"This command undoes any database changes not yet committed to the database using the COMMIT command." (Gavin Powell, "Beginning Database Design", 2006)

"A DBMS recovery technique that aborts active applications and attempts to reinstate the state of the database prior to initiating the applications active at the time the database failed." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A process that reverts writes operations to ensure the consistency of all replica set members." (MongoDb, "Glossary", 2008)

"Undoes changes performed within a transaction before the transaction is committed." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"An operation that returns the database to a previous state. The transaction can be rolled back completely, canceling a pending transaction, or to a specified point. Rollbacks allow the database to be restored to a valid state if invalid operations are performed or after the database server fails." (John Goodson & Robert A Steward, "The Data Access Handbook", 2009)

"A SQL command that restores the database table contents to their original condition (the condition that existed after the last COMMIT statement)." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"To undo the database statements performed prior to a commit of the transaction." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"An operation that ends a current transaction and cancels all the recent changes to the database until the previous checkpoint/ commit point." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"The process of restoring a set of data or process state to a previous consistent state saved through a checkpoint operation." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

DBMS: Fifth Normal Form (Definitions)

"A table is in 5NF if it is in 4NF and contains no related multi-valued dependencies." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A table is in fifth normal form (5NF) if and only if there are no possible lossless decompositions into any subset of tables; in other words, if there is no possible lossless decomposition, then the table is in 5NF." (Toby J Teorey, ", Database Modeling and Design" 4th Ed., 2010)

"Specifies that every join dependency for the entity must be a consequence of its candidate keys." (Craig S Mullins, "Database Administration", 2012)

08 July 2009

DBMS: Fourth Normal Form (Definitions)

"A relation is in fourth normal form if it is in BCNF and contains no multivalued dependencies." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A table is in 4NF if it is in BCNF and contains no unrelated multi-valued dependencies." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A table is in fourth normal form (4NF) if and only if it is at least in BCNF and if whenever there exists a nontrivial multivalued dependency of the form X->>Y, then X must be a superkey in the table." (Toby J Teorey, ", Database Modeling and Design 4th Ed", 2010)

"In relational theory, the fourth of Dr. Codd’s constraints on a relational design: No column within a primary key may be completely dependent on another column within the same primary key." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"A table is in 4NF when it is in 3NF and contains no multiple independent sets of multivalued dependencies." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"An entity is in fourth normal form (4NF) if and only if it is in 3NF and has no multiple sets of multivalued dependencies." (Craig S Mullins, "Database Administration", 2012)

07 July 2009

DBMS: Third Normal Form (Definitions)

"Table data that complies with both the first and second normal forms and directly relates to each rows primary key. See also first normal form; second normal form." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"In relational theory, the third of Dr. Codd’s constraints on a relational design: Each attribute must depend only on the primary key." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"Database design approach that eliminates redundancy and therefore facilitates insertion of new rows into tables in an OLTP application without introducing excessive data locking problems.  Sometimes referred to as normalized." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit" 2nd Ed., 2002)

"A level of normalization in which all attributes in a table are fully dependent on its entire key. Third normal form is widely accepted as the optimal design for a transaction system. A schema in third normal form is often referred to as fully normalized, although there are actually additional degrees of normalization possible." (Christopher Adamson, "Mastering Data Warehouse Aggregates", 2006)

"A table is in 3NF if it is in 2NF and it contains no transitive dependencies." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A table is in third normal form (3NF) if and only if for every functional dependency X->A, where X and A are either simple or composite attributes (data items), either X must be a superkey or A must be a member attribute of a candidate key in that table." (Toby J Teorey, ", Database Modeling and Design 4th Ed", 2010)

"A table is in 3NF when it is in 2NF and no non-key attribute is functionally dependent on another non-key attribute; that is, it cannot include transitive dependencies." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management 9th Ed", 2011)

"An entity is in third normal form if and only if it is in second normal form and every non-key attribute is non-transitively dependent on the primary key." (Craig S Mullins, "Database Administration", 2012)

DBMS: Backup (Definitions)

"(1) A system, component, file, procedure, or person available to replace or help restore a primary item in the event of a failure or externally caused disaster. (2) To create or designate a system, component, file, procedure, or person as in (1)." (IEEE, "IEEE Standard Glossary of Software Engineering Terminology", 1990)

"A copy of a database or transaction log, used to recover from a media failure." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A copy of a database, transaction log, file, or filegroup. Use this object to recover data after a system failure." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A spare copy of a file or files that have been created in case the original data is damaged or lost." (Andy Walker, "Absolute Beginner’s Guide To: Security, Spam, Spyware & Viruses", 2005)

"Making copies of data to a device other than the original data store." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"This is a copy of a database that can be used to bring the database back to a stable condition in the event of a disaster." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"The process of copying your database’s information to another form of media, such as tape or disk. A good backup strategy is vital for any production SQL Server environment." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"(1) The process of making a copy of data from a database to ensure its continued availability in the event of a hardware or software failure requiring recovery of the database to restore the data. (2) The copy itself." (Craig S Mullins, "Database Administration", 2012)

"A duplicate of a program, a disk, or data, made either for archiving purposes or for safeguarding files." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A utility that copies databases, files, or subsets of databases and files to a storage medium. This copy can be used to restore the data in case of system failure." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A complete spare copy of data for purposes of disaster recovery. Backups are nonindexed mass storage and cannot substitute for indexed, archived information that can be quickly searched and retrieved (as in archiving)." (Robert F Smallwood, "Information Governance: Concepts, Strategies, and Best Practices", 2014)

"A copy of a database or table space that can be stored on a different medium and used to restore the database or table space in the event of failure or damage to the original." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Copying data to protect against loss of integrity or availability of the original." (ITIL)

06 July 2009

DBMS: Transaction Log Backup (Definitions)

 "A backup of the transaction log that flushes the transactions from the transaction log to a file. To have transaction log backup integrity, each consecutive file must not break the LSN chain." (Allan Hirt et al, "Microsoft SQL Server 2000 High Availability", 2004)

"A backup of transaction logs that includes all log records not backed up in previous log backups. Log backups are required under the full and bulk-logged recovery models and are unavailable under the simple recovery model." (SQL Server 2012 Glossary, "Microsoft", 2012)

"This type of backup makes a copy of all transactions in the transaction log, and it can clear all the inactive transactions from the log, thus giving the log more space to hold new transactions." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"A backup of transaction logs that includes all log records not backed up in previous log backups. Log backups are required under the full and bulk-logged recovery models and are unavailable under the simple recovery model." (Microsoft, "SQL Server 2012 Glossary", 2012)

"Special database backups that contain a sequential record of all data modifications that have occurred within a database. Transaction log backups can be used to perform point-in-time recovery. See also point-in-time recovery." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition, 2nd Ed.", 2013)

DBMS: Full Backup (Databases)

"A complete point-in-time backup of a database." (Allan Hirt et al, "Microsoft SQL Server 2000 High Availability", 2004)

"A backup of the entire hard drive or array." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"This is a type of backup that backs up the entire database, but not the transaction logs." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"A backup of the entire database that includes the database files, the locations of those files, and the portions of the transaction log (from the LSN recorded at the start of the backup to the LSN at the end of the backup). This is the first type of backup you will need to do in any backup strategy because all the other backup types depend on the existence of a full backup. A full backup is sometimes called a baseline in a backup strategy." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft® SQL Server™ 2005 Optimization and Maintenance 70-444", 2007)

"A full backup backs up the complete database. This includes all data, all objects, and all files. A full backup also backs up the transaction log, but does not truncate it. Both differential and transaction log backups need to have a full backup done first." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"As its name implies, this type of backup archives all information within a database. Should the database be lost or damaged, you can restore it to its state as of the time you created the full backup. See also full differential backup; partial backup; restore." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"A backup of an entire database." (SQL Server 2012 Glossary, "Microsoft", 2012)

"A backup operation that backs up all files and sets their archive attribute to Off." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

05 July 2009

DBMS: First Normal Form (Definitions)

"Eliminate repeating groups, such that all records in all tables can be identified uniquely, by a primary key in each table. In other words, all fields other than the primary key must depend on the primary key." (Gavin Powell, "Beginning Database Design", 2006)

"A relation is in first normal form if it contains no repeating groups." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"One of the three normal forms that make up relational database guidelines, this rule states that a table should not have any repeating fields." (Robert D Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A table is in 1NF if it satisfies basic conditions to be a relational table." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A table is in first normal form (1NF) if and only if there are no repeating columns of data taken from the same domain and having the same meaning." (Toby J Teorey, ", Database Modeling and Design 4th Ed", 2010)

"In relational theory, the first of Dr. Codd’s constraints on a relational design: Every tuple may have only one value for an attribute in a relation." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"The first stage in the normalization process. It describes a relation depicted in tabular format, with no repeating groups and with a primary key identified." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"An entity is in first normal form if and only if all underlying domains contain atomic values only." (Craig S Mullins, "Database Administration", 2012)

04 July 2009

DBMS: Incremental Backup (Definitions)

"Backups that only copy objects that have changed since the last backup." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"A database backup containing only the data that has changed since the last full backup or incremental copy was made." (Craig S Mullins, "Database Administration", 2012)

"A backup that saves files that have changed since the last backup. When data is backed up, the archive bit on a file is turned off, and when changes are made to the file, the archive bit is set again. An incremental backup uses this information to only back up files that have changed since the last backup. An incremental backup turns the archive bit off again, and the next incremental backup backs up only the files that have changed since the last incremental backup. This sort of backup saves time, but it means that the restore process will involve restoring the last full backup and every incremental backup made after it." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition, 2nd Ed.", 2013)

"A backup operation that backs up all files that have the archive attribute set to On and then sets the attribute to Off." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"A copy of all database data that has changed since the most recent successful full backup operation. An incremental backup is also known as a cumulative backup image because each incremental backup includes the contents of the previous incremental backup." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

03 July 2009

DBMS: Differential Backup (Definitions)

"A database backup that records only pages that have changed in the database since the last full database backup. A differential backup is smaller and faster to restore than a full backup and has minimal effect on performance." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"This type of database backup records only those changes made to the database since the last full database backup. A differential backup is smaller, and is faster to restore than a full backup and has minimal effect on performance." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A type of database backup that only backs up changes made to the database since the last full database backup." (Allan Hirt et al, "Microsoft SQL Server 2000 High Availability", 2004)

"This is a type of backup that backs up changes to the database only since the last full backup was made." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"A backup type that backs up all the changes since the last full backup. Since the differential backup only backs up the changes, it can be done much quicker than a full backup. A possible backup strategy might include performing a full backup once a week and doing differential backups daily." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A backup containing only changes made to the database since the preceding data backup on which the differential backup is based." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A database backup operation that copies only the database pages that have been modified since the last full database backup." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition" 2nd Ed., 2013)

"Like an incremental backup, but only backs up files with the archive bit set—files that have changed since the last backup. Unlike the incremental backup, however, it does not reset the archive bit. Each differential backup backs up all files that have changed since the last backup that reset the bits. Using this strategy, a full backup is followed by differential backups. A restore consists of restoring the full backup and then only the last differential backup made." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition, 2nd Ed.", 2013)

"A backup operation that backs up all files that have the archive attribute set to On but does not change that attribute." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

[delta backup:] "A copy of all database data that has changed since the last successful backup (full, incremental, or delta) of the table space in question. A delta backup is also known as a differential, or noncumulative, backup image. The predecessor of a delta backup image is the most recent successful backup that contains a copy of each of the table spaces in the delta backup image." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

DBMS: Cardinality (Definitions)

 "The classification of a relationship; for example, one-to-many, many-to-many, and so on." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"The number of tuples (rows) in a relationship. For example, a relationship can be one-to-one, one-to-many, or many-to-many." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"The number of unique values for a given column in a relational table. Low cardinality refers to a limited number of values, relative to the overall number of rows in the table." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit 2nd Ed ", 2002)

"Cardinality denotes the maximum number of occurrences of one entity that can be related to another entity. Usually, these are expressed as “one” or “many.” Change Data Capture Change data capture is a technique for propagating only changes to source data through the data acquisition process." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"The number of distinct values in a column of a table." (Bob Bryla, "Oracle Database Foundations", 2004)

"The cardinality of a relationship represents the number of occurrences between entities. An entity with a cardinality of one is called a parent entity, and an entity with a cardinality of one or more is called a child entity." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"The number of distinct values taken on by an attribute." (Christopher Adamson, "Mastering Data Warehouse Aggregates", 2006)

"The number of tuples in a relation." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A representation of the minimum and maximum allowed number of values for an attribute. In semantic object models, written as L.U where L and U are the lower and upper bounds. For example, 1.10 means an attribute must occur between 1 and 10 times." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A relationship in a data model denoting how many instances of one entity class can be related to an instance of another entity class - zero, one, or many." (Danette McGilvray, "Executing Data Quality Projects", 2008)

"The measure of the number of elements within a set of values. For example, the set A = { 2, 4, 6 } contains 3 elements, and has a cardinality of 3." (MongoDb, "Glossary", 2008)

"In relationships, the characteristic  of a relationship that specifies the upper and lower bounds of how many instances of one entity or object type can be related to each instance of the same or some other entity or object type. Cardinality is separately specified at each end of the relationship. At each end the choices are 0, 1, or M. Combining the cardinality at both ends of a binary relationship, yields 3 x 9 - 1 = 8 possibilities (0:0 is not a valid option)." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The number of entities or members in a set." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The number of entities that can exist on each side of a relationship." (Microsoft, "SQL Server 2012 Glossary", 2012)

"The number of occurrences that may exist between a pair of entities. Another way of looking at cardinality is as the number of entity occurrences applicable to a specific relationship. Sometimes the term degree is used instead of cardinality. An alternate usage of the term cardinality within the realm of database administration is a database statistic used by the relational optimizer defining the number of occurrences of a value within a column (or set of columns)." (Craig S Mullins, "Database Administration", 2012)

"The number of rows that is expected to be or is returned by an operation in an execution plan. Data has low cardinality when the number of distinct values in a column is low in relation to the total number of rows." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"The number of occurrences of two units of data that participate in a relationship" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The cardinality of a relationship is the number of instances that can be associated with each entity type in a relationship." (Robert J Glushko, "The Discipline of Organizing: Professional Edition, 4th Ed", 2016)

"The number of rows in a database table or the number of elements in an array. See also associative array." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

DBMS: Second Normal Form (Definitions)

"The second normal form (2NF) requires that all attributes be dependent on the whole key. To attain 2NF, the entity must be in 1NF and every nonprimary attribute must be dependent on the entire primary key for its existence. 2NF further reduces possible redundancy in the data model by removing attributes that are dependent on part of the key and placing them in their own entity." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"A relation schema R is in 2NF if every nonprime attribute A in R is fully functionally dependent on the primary key of R." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A table is in 2NF if it is in 1NF and every field that is not part of the primary key depends on every part of the primary key." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"Data is said to be in the second normal form if it complies with the first normal form and has one or more columns in a table that uniquely identify each row." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A table is in second normal form (2NF) if and only if each non-key attribute (data item) is fully dependent on the primary key, that is either the left side of every functional dependency (FD) is a primary key or can be derived from a primary key." (Toby J Teorey, ", Database Modeling and Design" 4th Ed, 2010)

"In relational theory, the second of Dr. Codd’s constraints on a relational design: Each attribute must depend on the entire primary key." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"The second stage in the normalization process in which a relation is in 1NF and there are no partial dependencies (dependencies in only part of the primary key)." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"An entity is in second normal form if and only if it is in first normal form and every non-key attribute is fully dependent on the key." (Craig S Mullins, "Database Administration", 2012)

"The second level of normalization for a table in a relational database. A table is in 2NF if:" (Rod Stephens, "Beginning Software Engineering", 2015)

02 July 2009

DBMS: Denormalization (Definitions)

"The technique of placing data often accessed/used together in a physical location that optimizes the performance of the system." (Margaret Y Chu, "Blissful Data ", 2004)

"An intentional violation of the rules of normalization done to increase performance of a database. It typically occurs in varying degrees during all phases of physically implementing a database. Database designs are often denormalized to accomplish a specific performance-related goal. Denormalization can’t be done without a thorough understanding of the data and the needs of the customer." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"The process of adding planned redundancy to an already fully normalized data model." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"The technique of placing normalized data in a physical location that optimizes the performance of the system." (William H Inmon, "Building the Data Warehouse", 2005)

"Most often the opposite of normalization, more commonly used in data warehouse or reporting environments. Denormalization decreases granularity by reversing normalization, and otherwise." (Gavin Powell, "Beginning Database Design", 2006)

"Organization of data by minimizing joins between tables and storing redundant values in a single table to reduce query time." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"The process of adding planned redundancy to an already fully normalized data model." (Thomas Moore, "MCTS 70-431: Implementing and Maintaining Microsoft SQL Server 2005", 2006)

"Denormalization is the process of combining tables so that they are easier to query. Denormalization is opposite to normalization. Denormalization is done to improve query performance." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"The formal process of introducing redundancy back into the database design to improve performance." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"Denormalization is the process of extracting data from normalized tables in the relational model of a data warehouse." (Robert Nisbet et al, "Handbook of statistical analysis and data mining applications", 2009)

"The consolidation of database tables to increase performance in data retrieval (query), despite the potential loss of data integrity. Decisions on when to denormalize tables are based on cost/benefit analysis by the DBA." (Toby J Teorey, ", Database Modeling and Design 4th Ed", 2010)

"A process by which a table is changed from a higher level normal form to a lower level normal form. Usually done to increase processing speed. Potentially yields data anomalies." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed, 2011)

"Undoing the effect of normalization; the process of putting one fact in numerous places in the database." (Craig S Mullins, "Database Administration", 2012)

"The intentional duplication of columns in multiple tables to increase data redundancy. Denormalization is sometimes used to improve performance." (Sybase, "Open Server Server-Library/C Reference Manual", 2019) 

DBMS: Transaction Log File (Definitions)

"A system table (syslogs) in which all changes to the database are recorded." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A reserved area of the database in which all changes to the database are recorded. The transaction log is stored in the Syslogs system table and is used by SQL Server during automatic recovery." (Patrick Dalton, "Microsoft SQL Server Black Book", 1997)

"A reserved area set aside for each database on the SQL Server that records all changes diat are made to die database. This enables SQL to recover a database if system problems are encountered." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A file or set of files containing a record of a database's transactions." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A file or set of files containing a record of the modifications made in a database." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"Transactions in SQL Server are written to the transaction log before they are written to the database. This log information is used primarily for database recovery in the event of a disaster." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"A file containing a record of database changes." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"File that records transactional changes occurring in a database, providing a basis for updating a master file and establishing an audit trail." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A file-system–based, internal database construct that records data and table modifications, making it possible to restore information to its previous state should the application roll back a transaction." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A file used by the database management system to record all database transactions. The log file is used for recovery of the database in case of failures." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A feature used by the DBMS to keep track of all transaction operations that update the database. The information stored in this log is used by the DBMS for recovery purposes." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management 9th Ed", 2011)

"A collection of records describing the sequence of events that occur during DBMS execution to be used for database recovery in the event of a DBMS failure." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures" 2nd Ed., 2012)

"A file that records transactional changes occurring in a database, providing a basis for updating a master file and establishing an audit trail." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A collection of records that sequentially describes the events that occur in a system. A record of events." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"A set of primary and secondary log files consisting of log records that record all changes to a database. The database log is used to roll back changes for units of work that are not committed and to recover a database to a consistent state." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

DBMS: Entity Relationship Diagram (Definitions)

"A graphical method of showing the entities, relationships, and attributes in a data model." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"Drawings of boxes and lines to communicate the relationship between tables. Both third normal form (3NF) and dimensional models can be represented as ER diagrams because both consist of joined relational tables. The key difference between the models is the degree of dimension normalization. A dimensional model is a second normal form (2NF) model." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit" 2nd Ed., 2002)

"The ERD is a proven and reliable data-modeling approach with straightforward rules of construction. The normalization rules yield a stable, consistent data model that upholds the policies and rules of engagement established by the enterprise. The resulting database schema is the most efficient in terms of storage and data loading as well." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"A diagram that represents the structural contents (the fields) in tables for an entire schema, in a database. Additionally included are schematic representations of relationships between entities, represented by various types of relationships, plus primary and foreign keys." (Gavin Powell, "Beginning Database Design", 2006)

"A diagram (or graph) of entities and their relationships, and possibly the attributes of those entities." (Toby J Teorey, ", Database Modeling and Design" 4th Ed., 2010)

"A diagram that depicts an entity relationship model’s entities, attributes, and relations." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed, 2011)

"The graphical diagram for an Entity Relationship data model. The underlying data model generally includes more semantics than is or can be represented in the view shown on the diagram, e.g., some business rules." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

 "An E/R diagram graphically depicts the entities and relationships of a data model." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures" 2nd Ed, 2012)

01 July 2009

DBMS: Normalization (Definitions)

"Normalization is the database design process of discarding repeating groups, minimizing redundancy, eliminating composite keys for partial dependency, and separating non-key attributes. Various levels of normalization and various rules or tests have been formalized for performing normalization." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"The process of transforming database designs into logical structures by following rules and principles of relational database theory. Different 'normal forms' exist, each further reducing both redundancy and the possibility of update anomalies. 'Third normal form' is a design in which all the attributes of each row 'depend on the key, the whole key, and nothing but the key'." (Bill Pribyl & Steven Feuerstein, "Learning Oracle PL/SQL", 2001)

"The process of designing a database so that its tables follow the rules specified by relational theory. In practice, this usually means that all database tables are in third normal form." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"Normalization is a method for ensuring that the data model meets the objectives of accuracy, consistency, simplicity, nonredundancy, and stability. It is a physical database design technique that applies mathematical rules to the relational data model to identify and reduce insertion, updating, or deletion anomalies." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"A formal approach in data modeling that examines and validates attributes and their entities in the Logical data model. The purpose of data normalization is to ensure that each attribute belongs to the entity to which it has been assigned, that redundant storage of information is minimized, and that storage anomalies are eliminated." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling 2nd Ed.", 2005)

"Developed by Dr. E. F. Codd in 1970, database normalization is the process of simplifying data and database design to achieve maximum performance and simplicity. This process involves the removing of useless and redundant data." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"A process by which a relational schema design is adjusted to reduce the possibility of storing data redundantly. As a schema is normalized, attributes that contain repeating values are moved into new tables and replaced by a foreign key. This process requires analyzing and understanding the dependencies among attributes and key columns. There are several degrees of normalization, which formally describe the extent to which redundancies have been removed. Third normal form (3NF) is widely accepted as the optimal relational design for a transaction system. A star schema design is often referred to as denormalized, although it is actually in second normal form." (Christopher Adamson, "Mastering Data Warehouse Aggregates", 2006)

"The organization of data to reduce redundancy by creating many linked tables so that a value is stored in only one place." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"The process of simplifying the structure of data. Normalization increases granularity and Granularity is the scope of a definition for any particular thing. The more granular a data model is, the easier it becomes to manage, up to a point, depending, of course, on the application of the database model." (Gavin Powell, "Beginning Database Design", 2006)

"A formal process of removing redundancy from a database design by separating it into children tables from the parent table." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"Logical design process in which data is separated into multiple, related tables. The process allows databases to perform optimally." (Sara Morganand & Tobias Thernstrom , "MCITP Self-Paced Training Kit : Designing and Optimizing Data Access by Using Microsoft SQL Server 2005 - Exam 70-442", 2007)

"The design process for generating entity specifications to minimize both data redundancy and update anomalies." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A series of database design recommendations that dictate how information should be dispersed among tables as well as how these tables should relate." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"The process of transforming the database's structure to minimize the changes of certain kinds of data anomalies." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"The process of designing relations to adhere to increasingly stringent sets of rules to avoid problems with poor database design." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"The process of breaking up a table into smaller tables to eliminate problems with unwanted loss of data (the egregious side effects of losing data integrity) from the deletion of records and inefficiencies associated with multiple data updates." (Toby J Teorey, ", Database Modeling and Design" 4th Ed., 2010)

"The process, originally articulated by Dr. E. F. Codd in his relational theory, for organizing data to reduce redundancy to the minimum possible. It involves guaranteeing that each attribute in a 'relation' (table or entity class) is truly an attribute of that relation and none other. The process involves organizing data to follow the constraints of at least first normal form, second normal form, and third normal form. Additional value is found in Boyce-Codd normal form, fourth normal form, and fifth normal form." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"A process that assigns attributes to entities in such a way that data redundancies are reduced or eliminated." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management 9th Ed", 2011)

"The process of organizing data to minimize redundancy and remove ambiguity. In simple terms, normalization is the process of identifying the one best place each fact belongs." (Craig S Mullins, "Database Administration", 2012)

"The process of organizing data at its detailed level into according to its existence criteria" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The process of restructuring a data model by reducing its relations to their simplest forms. It is a key step in the task of building a logical relational database design. Normalization helps avoid redundancies and inconsistencies in data. An entity is normalized if it meets a set of constraints for a particular normal form (first normal form, second normal form, and so on)." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

DBMS: Transactions (Definitions)

"A mechanism for ensuring that a set of actions is treated as a single unit of work." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A series of SQL statements that constitute an atomic unit of work: either all are committed as a unit or they are all rolled back as a unit. A transaction begins with the first statement since the last transaction end and finishes with a transaction end (either COMMIT or ROLLBACK) statement." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"A group of database operations combined into a logical unit of work that is either wholly committed or rolled back. A transaction is atomic, consistent, isolated, and durable." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A logical unit of work consisting of one or more SQL statements that must all succeed or all fail to keep the database in a logically consistent state. A transfer of funds from a bank account is a logical transaction, in that both the withdrawal from one account and the deposit to another account must succeed for the transaction to succeed." (Bob Bryla, "Oracle Database Foundations", 2004)

"A series of database operations that should be treated as a single atomic operation so either they all occur or none of them occur." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"One or more SQL statements that make up a unit of work performed against the database. Either all the statements in a transaction are committed as a unit or all the statements are rolled back as a unit." (John Goodson & Robert A Steward, "The Data Access Handbook", 2009)

"A group of database operations combined into a logical unit of work that is either wholly committed or rolled back. A transaction is atomic, consistent, isolated, and durable." (Jim Joseph, "Microsoft SQL Server 2008 Reporting Services Unleashed", 2009)

"An atomic unit of work with respect to recovery and consistency." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures" 2nd Ed, 2012)

"An atomic unit of work with respect to recovery and consistency." (Craig S Mullins, "Database Administration", 2012)

"Each individual purchase. Each time customers swipe a card, shell out cash, or press the purchase confirmation button online, a transaction takes place. This data often is referred to as transaction log (T-log) data." (Brittany Bullard, "Style and Statistics", 2016)

DBMS: Entity-Relationship Model (Definitions)

"A common way to organize, think about, or discuss the elements of the 'real world' that a database design will represent, by dividing them into entities and relationships." (Bill Pribyl & Steven Feuerstein, "Learning Oracle PL/SQL", 2001)

"A type of conceptual data model that represents structured data in terms of entities and relationships. An entity-relationship diagram can be used to represent information objects and their relationships visually. Because the constructs used in the entity-relationship model can easily be transformed into relational tables, this type of model is often used in database design." (J P Getty Trust, "Introduction to Metadata" 2nd Ed., 2008)

"A data model that is used to represent data in its purest form and to define relationships between different entities." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A technique for representing entity relationships that is independent of any specific data model and any specific software." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"A conceptual data model involving entities, relationships among entities, and attributes of those entities." (Toby J Teorey, ", Database Modeling and Design" 4th Ed., 2010)

"A data model developed by P. Chen in 1975. It describes relationships (1:1, 1:M, and M:N) among entities at the conceptual level with the help of ER diagrams." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"An abstract and conceptual representation of data. Entity-relationship model consists of a set of entities, characterized by attributes and linked by relationships." (International Qualifications Board for Business Analysis, "Standard glossary of terms used in Software Engineering", 2011)

"1.Generally, a record-based data modeling scheme that focuses on entities and relationships in the presentation of data model diagrams, thus suppressing the display of attributes. A true ER model allows multi-valued data items and repeating groups of items (nested relations, thus violating first normal form), retains M:N relationships, attributed relationships, subtypes/supertypes, ternary and higher-order relationships, none of which can be represented directly in a relational data model. A true ER model generally excludes (defers) the representation of entity identifiers and foreign keys. Originally proposed and named by Peter Chen (1976). 2.In relational modeling, the most popular style of data model, defining entities and the business relationships between the entities. Some more detailed models include also some of the attributes of these entities, usually those involved in the relationships as keys." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A data management approach that graphically represents relationships between data. This allows developers to create new relationships between data sources without complex programming." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A logical view of data within a system, representing the entities in the system as well as relationships among the entities, attributes of the entities, and attributes of the relationships." (IEEE 610.5-1990)
