
10 April 2024

🧭Business Intelligence: Perspectives (Part XI: Ways of Thinking about Data)

Business Intelligence Series

One can sometimes observe the tendency of data professionals to jump from a business problem directly to data and data modeling without trying to understand the processes behind the data. One could say the behavior is driven by the eagerness of exploring the data, though even later questions about the processes themselves are seldom considered. One can argue that maybe the processes are self-explanatory, though that’s seldom the case.

Moreover, looking at the datasets available on the web, usually there’s a fact table and the associated dimensions, the data describing only one process. It’s natural to presume that there are data professionals who don’t think much about processes, or better said, in terms of processes. A similar big jump can be observed in blog posts on dashboards and/or reports, with bloggers moving from the data directly to the data model.

In the world of complex systems like Enterprise Resource Planning (ERP) systems, thinking in terms of processes is mandatory because a fact table can hold the data for several processes, while a process can span multiple fact-like tables and thus have multiple levels of detail. Moreover, processes are broken down into sub-processes and procedures that have a counterpart in the data as well.

Moreover, within a process there can be multiple perspectives that are usually module- or role-dependent. A perspective is a role’s orientation to the world to which the data belong, and it’s slightly different from what the data professional considers a view, the perspective being a projection over a set of processes within the data, while a view is a projection of the perspectives into the data structure.

For example, considering the order-to-cash process, there are several sub-processes like order fulfillment, invoicing, and payment collection, though several other processes can be involved, like credit management or production and manufacturing. Creating, updating, respectively canceling an order are examples of procedures.

The sales representative, the shop worker, and the accountant will have different perspectives projected into the data, each focusing on the modules they work with. Thinking in terms of modules is probably the easiest way to identify the boundaries of the perspectives, though the rules are occasionally more complex than this.

When defining and/or attempting to understand a problem, it’s important to understand which perspective needs to be considered. For example, the sales volume can be projected based on Sales orders, on invoiced Sales orders, respectively on the General ledger postings, and the three views can result in different numbers. Moreover, there are partitions within these perspectives based on business rules that determine what to include or exclude from the logic.

One can define a business rule as a set of conditional logic that constrains some part of the data in the data structures by specifying what is allowed or not, though usually we refer to a special type, the selection business rule, which determines what data are selected (e.g. open Purchase orders, Products with Inventory, etc.). When building the data model we need to consider business rules as well, though we might also need to check whether they are enforced.
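As a minimal sketch, assuming a hypothetical Purchase orders table, such a selection business rule typically translates into a reusable filter in the extraction or reporting logic:

-- "open Purchase orders" as a selection business rule (hypothetical names)
SELECT POH.PurchaseOrderId
, POH.VendorId
, POH.OrderDate
FROM PurchaseOrderHeaders POH
WHERE POH.Status = 'Open'; -- the condition that defines the rule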

Moreover, it’s useful to also think in terms of (data) entities and sub-entities, in which the data entity is an abstraction from the physical implementation of database tables. A data entity encapsulates (hides the internal details of) a business concept and/or perspective into an abstraction (simplified representation) that makes development, integration, and data processing easier. In certain systems like Dynamics 365 it’s important to think at this level because data entities can simplify data modeling considerably.


30 December 2020

🧊Data Warehousing: ETL (Part V: The Transform Subprocess)

Data Warehousing

As part of the ETL process, the Transform subprocess is responsible for bridging the gap between source and destination by leveraging SQL or the rich set of (data) transformations available in ETL tools, either to enable the implicit or explicit conversion between source and destination data types, or to transform the data as needed. 

Transformations act on data as operators, the challenge being to transform the data in the smallest number of steps and in the most efficient way. Some of the transformations available in ETL tools (e.g. conversions, sorting, sampling, joins, lookups, aggregation, pivoting, unpivoting) can be replaced by SQL-based logic. One can easily prepare the data directly in the extraction query, thus taking advantage of the power provided by the database engine. Moreover, the logic can be encapsulated in views or other objects and called as required by the extraction logic, when the source database allows it. This approach allows maintaining the logic independently of the ETL packages.
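Where the source database allows it, a hedged sketch of the latter technique (table and column names are hypothetical): the view encapsulates the joins and explicit conversions, while the extraction query stays trivial.

-- the view encapsulates joins and explicit conversions
CREATE VIEW dbo.vSalesOrderLines
AS
SELECT SOH.SalesOrderId
, SOH.OrderDate
, SOL.ProductId
, CAST(SOL.Quantity AS decimal(18, 2)) AS Quantity
, CAST(SOL.UnitPrice AS decimal(18, 2)) AS UnitPrice
FROM SalesOrderHeaders SOH
JOIN SalesOrderLines SOL
  ON SOL.SalesOrderId = SOH.SalesOrderId;

-- the extraction query called by the ETL package
SELECT *
FROM dbo.vSalesOrderLines
WHERE OrderDate >= '2020-01-01';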

Unfortunately, SQL can replace the transformations that address sequential logic, but not workflow-related logic (e.g. conditional splits, merges, multicasts, slowly changing dimensions) or logic that involves certain computational complexity (e.g. fuzzy groupings or lookups). Such gaps need to be filled by the ETL tools via the built-in transformations, by allowing developers to build custom logic, or simply by using COTS solutions, when they prove capable of filling the gap.

Copying the data 1:1 at table or entity level from the source system(s) involves in theory the simplest transformations, revolving mainly around conversions between data types. The usual troublemakers are the numeric and date values, which can be found in different formats or precisions in the various environments. As this can apply to the ETL environment itself, it’s important to consider environment-agnostic data types when possible (e.g. strings).
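As a hedged T-SQL-style sketch (names hypothetical), values staged as strings can be converted explicitly at this point, making conversion failures visible instead of leaving them to implicit casts:

-- explicit conversions from environment-agnostic (string) staging columns
SELECT TRY_CAST(STG.OrderDate AS date) AS OrderDate -- NULL when the format is invalid
, TRY_CAST(STG.Amount AS decimal(18, 2)) AS Amount
, STG.CurrencyCode
FROM StagingOrders STG;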

Other sources of concern are the user-defined data types which don’t have equivalents between the systems, thus requiring additional transformations for further handling, respectively the invalid values which need to be handled accordingly. Besides the data from the source system(s) and the derived values, where applicable one needs to consider also the parameter-based or hardcoded metadata created in the process.

Independently of the purpose of the ETL packages, it is usually required to document the data flow associated with them and the rules applied in the transformations in what is known as a mapping document. Such a document needs to be understandable by the business, as it can serve for Data Management, projects, or other purposes. Even if it’s almost impossible to document everything, at minimum one needs to provide the source and destination tables, the attributes considered in the mappings, respectively the most important rules the business should be aware of. Otherwise, the technical people can always go back to the SQL queries, when needed.
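A minimal sketch of such a mapping document, with hypothetical names, could look like this:

Source (table.attribute)       Destination (table.attribute)  Rule
SalesOrderHeaders.OrderDt      FactSales.OrderDate            cast text to date
SalesOrderLines.Qty            FactSales.Quantity             exclude canceled lines
SalesOrderLines.UnitPrice      FactSales.UnitPrice            convert to company currency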

Some sources consider each non-trivial transformation a business rule. Yet, even if the rules used in transformations constrain the (business) data, not every such rule is relevant for the business to the degree that it constrains some part of the business.

Data Migrations involve transformations between (database) schemas. Therefore, the logic required to move the data could in theory be handled with a few well-designed packages, though there are considerations like logic complexity, transparency, flexibility, performance, or auditability which could be better handled by using other techniques (e.g. saving the data to intermediary tables, breaking down the logic into several steps). Such considerations can also apply to simple ETL packages. Therefore, it’s important to recognize such scenarios, weigh the choices, and choose what fits best. However, unless one knows what one’s doing, it’s recommended to use the methods one knows best.
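A hedged T-SQL-style sketch of the intermediary-table technique (names hypothetical): each step materializes its result so it can be validated and audited before the next step runs.

-- step 1: land the source data unchanged
SELECT *
INTO stg.CustomersRaw
FROM src.Customers;

-- step 2: cleanse and standardize into a second intermediary table
SELECT CustomerId
, UPPER(LTRIM(RTRIM(CountryCode))) AS CountryCode
, COALESCE(CustomerName, '(unknown)') AS CustomerName
INTO stg.CustomersClean
FROM stg.CustomersRaw;

-- step 3: load the target from the cleansed set
INSERT INTO dbo.Customers (CustomerId, CountryCode, CustomerName)
SELECT CustomerId, CountryCode, CustomerName
FROM stg.CustomersClean;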


27 December 2020

🧊Data Warehousing: ETL (Part III: The Extract Subprocess)

 

Data Warehousing Series

As part of the ETL process, with applicability to Data Warehousing, Data Migrations, Data Integrations, or similar scenarios, the extraction subprocess is responsible for preparing and implementing the logic required to extract the data from the various source systems at the required level of detail. The extraction is typically done based on SQL queries, as long as one deals with relational databases or other OLEDB- or ODBC-based data repositories, including flat or MS Office files.

One can consider the preparation of the extraction logic as a separate design subprocess of the targeted solution. Even if high-level design decisions are made at the respective level, the low-level design needs to be considered at ETL-package level. As part of the process one identifies the source of the data in terms of systems, tables, and attributes to be imported, as well as the joins, business rules, and transformation rules that need to be applied to the data. This can involve reengineering the logic from the source system(s), as well as data profiling, discovery, or exploration activities.

A common practice is to copy the source tables 1:1 into the solution, possibly considering only the needed attributes in order to minimize the necessary space, the loading time, and the content’s complexity, even if this adds more effort in the design phase to identify only the needed attributes. If further attributes are identified at a later stage, the packages need to be modified accordingly. If the data volume or the number of unnecessary attributes is negligible, copying the table 1:1 could prove to be the best strategy.
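A minimal sketch of the practice (hypothetical names): the source table is copied 1:1, though the query restricts the output to the attributes needed downstream.

-- 1:1 copy restricted to the needed attributes
SELECT ITM.ItemId
, ITM.ItemName
, ITM.ItemGroup
, ITM.CreatedDate
FROM src.InventoryItems ITM;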

A second approach is to model within the extraction the (business) entity as designed within the source system. For example, the entity could be split over multiple tables for design or other reasons; thus, the extraction query will attempt to model the entity. This approach reduces to some degree the number of tables in the targeted solution, as well as the number of ETL packages involved, while providing a clear depiction of the entities involved.
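As a sketch, if the Customer entity is split in the source between a base table and an address table (hypothetical schema), the extraction query reassembles the entity:

-- the extraction query models the Customer entity from its component tables
SELECT CST.CustomerId
, CST.CustomerName
, ADR.Street
, ADR.City
, ADR.CountryCode
FROM src.Customers CST
LEFT JOIN src.CustomerAddresses ADR
  ON ADR.CustomerId = CST.CustomerId
 AND ADR.IsPrimary = 1; -- keep only the primary address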

A third approach is to extract the data as needed by the target system, possibly as a mix between master and transaction data, a fact which could easily lead to data redundancy with different timeliness and all the consequences resulting from it. This approach is usually met in solutions which require fast data availability to the detriment of design.

Unfortunately, there can be design constraints or choice considerations that lead to a mix of these approaches. While the impact caused by mixing the first two approaches is minimal, the third approach can cause more challenges, though it might be a small price to pay as long as the considered data are disconnected from other data.

To reduce the redundancy of data, it’s recommended to set as a goal the creation of a unique source of facts, which can be obtained by minimizing as much as possible the overlaps between tables, respectively entities. Ideally there should be no overlaps. On the other side, overlaps can be acceptable when the same data are available in several systems and the solution requires all the data to be available.

If the above approaches consider the vertical partitioning of the data, there can also be horizontal partitioning needs, especially when only a subset of the data is required or when the data must be partitioned based on a set of values. In addition, one might be forced to include transformation rules directly in the extraction logic, for example to handle conversion issues or to minimize certain design overhead early in the process. In practice it makes sense to link such choices to business rules and document them accordingly.
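A sketch combining both needs (hypothetical names): the data are partitioned horizontally by company and period, while a conversion rule is applied directly in the extraction.

-- horizontal partitioning plus an early transformation rule
SELECT TRX.TransactionId
, TRX.CompanyId
, TRY_CAST(TRX.PostingDate AS date) AS PostingDate -- conversion handled early
, TRX.Amount
FROM src.LedgerTransactions TRX
WHERE TRX.CompanyId = 'US01' -- subset: one company
  AND TRX.PostingDate >= '2020-01-01'; -- subset: one period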


07 November 2020

⛁DBMS: Event Streaming Databases (More of a Kafka’s Story)

Database Management

Event streaming architectures are architectures in which data are generated by different sources and then processed, stored, analyzed, and acted upon in real time by the different applications tapped into the data streams. An event streaming database is then a database that assures its data are continuously up to date, providing specific functionality like the management of connectors, materialized views, and the running of queries on data-in-motion (rather than on static data).
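As an illustration, a hedged sketch in ksqlDB-style syntax (stream, topic, and column names are hypothetical): unlike a classical query, the statements define a stream over an event topic and a materialized view that is continuously updated as new events arrive.

-- a stream defined over an event topic
CREATE STREAM orders (order_id VARCHAR, country VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC = 'orders', VALUE_FORMAT = 'JSON');

-- a materialized view maintained on data-in-motion
CREATE TABLE orders_by_country AS
  SELECT country, COUNT(*) AS order_count, SUM(amount) AS total_amount
  FROM orders
  GROUP BY country
  EMIT CHANGES;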

Reading about this type of technology, one can easily start fantasizing about the Web as a database in which intelligent agents process streams of data in real time, in which knowledge is derived and propagated over the networks in an infinite and ever-growing flow whose limits are hardly perceptible, in which the agents act as a mind disconnected in the end from the human intent. One is struck by the fusion of realism and the fantastic, more like in a story by Kafka in which the metamorphosis of technologies and social aspects can easily lead to absurd implications.

The link to Kafka was somehow suggested by Apache Kafka, an open-source distributed event streaming platform, which seems to set the trends within this newly developing market. Kafka provides database functionality and guarantees the ACID (atomicity, consistency, isolation, durability) properties of transactions while tapping into data streams.

Data streaming is an appealing concept, though it has some important challenges, like data overload or over-flooding, respectively the complexity derived from building specific (business) and integrity rules for processing the data and for keeping data consistency and truth within the ever-growing and ever-changing flows.

Data overload or over-flooding occurs when applications cannot keep pace with the volume of data or the events fired with each change. Imagine raindrops falling on a wide surface in which each millimeter or micrometer has its own rules for handling the raindrops, and this at large scale. The raindrops must infiltrate the surface to be processed and find their way to the water flows beneath, aggregating up to streams that could nurture seas or even oceans. The same metaphor can be applied to data events, in which the data pervade applications, accumulating in streams processed by engines to derive value. However, heavy rains can easily lead to floods in which water aggregates at the surface.

Business applications rely on predefined paths in which the flow of data is tidily linked to specific actions, themselves embedded in process sequences that reflect the material or cash flow. Any variation of the data flow from expectations will lead to inefficiencies and ultimately to chaos. Some benefit might be derived from data integrations between business applications; however, the applications must be designed for this purpose and handle extreme behaviors like over-flooding.

Data streams are maybe ideal for social media networks, in which one broadcasts data through the networks and any consumer that can tap into the network can further use the respective data. We can see, however, the problems of today’s social media – data, better said information, flow through the networks being changed to fit purposes that can easily diverge from the initial intent. Moreover, information gets denatured, misused, and overused to the degree that it overloads the networks, making it more and more difficult to distinguish between reliable and unreliable information. If common sense helps in the process of handling such information, the same cannot be said about machines or applications.

It will be challenging to deal with the vastness, vagueness, uncertainty, inconsistency, and deceit of the networks of the future; however, data streaming will more likely have a future as long as it can address such issues by design.


04 March 2018

🔬Data Science: Fuzzy Rule (Definitions)

"A conditional of the form IF X IS A, THEN Y IS B where A and B are fuzzy sets. In mathematical terms a rule is a relation between fuzzy sets. Each rule defines a fuzzy patch (the product A x B) in the system 'state space'. The wider the fuzzy sets A and B, the wider and more uncertain the fuzzy patch. Fuzzy rules are the knowledge-building blocks in a fuzzy system. In mathematical terms each fuzzy rule acts as an associative memory that associates the fuzzy response B with the fuzzy stimulus A." (Guido Deboeck & Teuvo Kohonen (Eds), "Visual Explorations in Finance with Self-Organizing Maps" 2nd Ed., 2000)

"In general, in rule-based systems, rules look something like: If A1 and A2 and … An then C1 and C2 and … Cm; where the Ai are the antecedents (conditions) on the left hand side (LHS) of the rule and the Cj are the consequents (conclusions) on the right hand side (RHS) of the rule. In this format, if all of the antecedents on the LHS of the rule are true then the rule will fire and the consequents will be asserted / executed. With Fuzzy rules both antecedents and conclusions can be of fuzzy nature." (Juan R González et al, Nature-Inspired Cooperative Strategies for Optimization, 2008)

"Fuzzy If-Then or fuzzy conditional statements are expressions of the form 'If A Then B', where A and B are labels of fuzzy sets characterised by appropriate membership functions. Due to their concise form, fuzzy If-Then rules are often employed to capture the imprecise modes of reasoning that play an essential role in the human ability to make decision in an environment of uncertainty and imprecision. The set of If-Then rules relate to a fuzzy logic system that are stored together is called a Fuzzy Rule Base." (Masoud Mohammadian, Supervised Learning of Fuzzy Logic Systems, 2009)

23 February 2017

⛏️Data Management: Data Integration (Definitions)

"The process of coherently using data from across platforms. applications or business units. Data integration ensures that data from different sources is merged allowing silos of data to be combined." (Tony Fisher, "The Data Asset", 2009)

"The planned and controlled:
a) merge using some form of reference,
b) transformation using a set of business rules, and
c) flow of data from a source to a target, for operational and/or analytical use. Data needs to be accessed and extracted, moved, validated and cleansed, standardized, transformed, and loaded. (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The collection of data from various sources with the same significance into one uniform record. This data may be physically integrated, for example, into a data warehouse or virtually, meaning that the data will remain in the source systems, however will be accessed using a uniform view." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"Data integration comprises the activities, techniques, and tools required to consolidate and harmonize data from different (multiple) sources into a unified view. The processes of extract, transform, and load (ETL) are part of this discipline." (Piethein Strengholt, "Data Management at Scale", 2020)

"Pulling together and reconciling dispersed data for analytic purposes that organizations have maintained in multiple, heterogeneous systems. Data needs to be accessed and extracted, moved and loaded, validated and cleaned, and standardized and transformed." (Information Management)

"The combination of technical and business processes used to combine data from disparate sources into meaningful insights." (Solutions Review)

"The process of retrieving and combining data from different sources into a unified set for users, organizations, and applications." (MuleSoft) 

"Data integration is the practice of consolidating data from disparate sources into a single dataset with the ultimate goal of providing users with consistent access and delivery of data across the spectrum of subjects and structure types, and to meet the information needs of all applications and business processes." (OmiSci) [source]

"Data integration is the process of combining data from multiple source systems to create unified sets of information for both operational and analytical uses." (Techtarget)

"Data integration is the process of bringing data from disparate sources together to provide users with a unified view. The premise of data integration is to make data more freely available and easier to consume and process by systems and users." (Tibco) [source]

"Data integration is the process of retrieving and combining data from different sources into a unified set of data. Data integration can be used to combine data for users, organizations, and applications." (kloudless)

"Data integration is the process of taking data from multiple disparate sources and collating it in a single location, such as a data warehouse. Once integrated, data can then be used for detailed analytics or to power other enterprise applications." (Xplenty) [source]

"Data integration is the process used to combine data from disparate sources into a unified view that can provide valuable and actionable information." (snowflake) [source]

"Data integration refers to the technical and business processes used to combine data from multiple sources to provide a unified, single view of the data." (OmiSci) [source]

"The discipline of data integration comprises the practices, architectural techniques and tools for achieving the consistent access and delivery of data across the spectrum of data subject areas and data structure types in the enterprise to meet the data consumption requirements of all applications and business processes." (Gartner)

18 January 2017

⛏️Data Management: Business Rules (Definitions)

"A statement expressing a policy or condition that governs business actions and establishes data integrity guidelines." (Larry P English, "Improving Data Warehouse and Business Information Quality", 1999)

"An organizational standard operating procedure that requires that certain policies be followed to ensure that a business is run correctly. Business rules ensure that the database maintains its accuracy with business policies."  (Microsoft Corporation, "Microsoft SQL Server 7.0 System Administration Training Kit", 1999)

"[…] a business rule is a compact statement about an aspect of a business. The rule can be expressed in terms that can be directly related to the business, using simple, unambiguous language that's accessible to all interested parties: business owner, business analyst, technical architect, and so on." (Tony Morgan, "Business Rules and Information Systems", 2002) 

"the set of conditions that govern a business event so that it occurs in a way that is acceptable to the business." (Barbara von Halle, 2002)

"The logical rules that are used to run a business." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A set of methods or guidelines associated with a company’s data and business processing that reflect its methods of conducting business operations." (Jill Dyché & Evan Levy, "Customer Data Integration" , 2006)

"A statement that defines or constrains some aspect of the business. It is intended to assert business structure or to control or influence the behavior of the business." (Alex Berson & Lawrence Dubov, "Master Data Management and Customer Data Integration for a Global Enterprise", 2007)

"Business-specific rule that constrains the data." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"The defined operations and constraints that help organizations create a data environment that promotes efficient operations and decision making. An example of a business rule for a hospital would be that no male patient can be marked pregnant. Organizations typically have thousands of business rules, but not all facets of the same organizations follow all of them, and, in some cases, the rules can conflict." (Tony Fisher, "The Data Asset", 2009)

"Either a set of conditions, a directive, or an 'element of guidance'. A constraint on a business’s behavior. There is not yet an industry standard definition of business rule although authors seem to be converging." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"A directive, intended to govern, guide or influence business behavior, in support of a business policy that has been formulated in response to an opportunity, threat, strength or weakness." (The Business Rules Group, "The Business Motivation Model: Business Governance in a Volatile World", 2005)

"An element of guidance that introduces an obligation or necessity, [and] that is under business jurisdiction" (Business Rules Team, 'Semantics of Business Vocabulary and Business Rules", 2005)

"The logical rules that are used to run a business" (Microsoft)

05 October 2012

🗄️Data Management: Business Rules – An Introduction

Data Management
Data Management Series

    "Business rules" seems to be a recurring theme these days – developers, DBAs, architects, business analysts, IT and non-IT professionals talk about the necessity to enforce them in data and semantic models, information systems, processes, departments or whole organizations. They seem to affect the important layers of an organization. In fact the same business rule can affect multiple levels either directly, or indirectly through the hierarchical or networked structure of causality it belongs to. When considered all the business rules, the overall picture can become very complex. The fact that there are multiple levels of interconnected layers, with applications and implications at macro or micro level, makes the complexity to fight back because in order to solve business-specific problems often you have to go at least one level above the level where the problems were defined, or to simplify the problems to a level of detail that allows to tackled.

    The Business Rules Group defines a business rule as "a statement that defines or constrains some aspect of the business" [1], a definition which seems closer to the vocabulary of IT people. Ronald G. Ross, in his book Principles of the Business Rule Approach, defines it as "a directive intended to influence or guide business behavior" [2], a definition closer to the vocabulary of HR people. In fact the two definitions are quite similar, both highlighting the constraining or guiding role of business rules. They also raise an important question – can everything that is catalogued as a constraint or guideline be considered a business rule? In theory yes; in practice there are constraints and guidelines with different impacts on the business, so depending on the context they need to be considered or not. What to consider is itself an art, which adds to the art of problem solving.

    Besides their identification, neither the definition nor the management of business rules seems an easy task. R.G. Ross considers that business rules need to be written and made explicit, expressed in plain language, independent of procedures and workflows, built on facts, motivated by identifiable and important business factors, accessible to authorized parties, specific, single-sourced, managed, specified by those people who have the relevant knowledge, and they should guide or influence behavior in desired ways [2]. This summarizes the various aspects that need to be considered when defining and managing business rules. Many organizations seem to be challenged by this, and it can indeed be challenging where business management maturity is lacking.

    Many business rules exist already in the functional and technical specifications written for the various software products built on request, in the documentation of purchased software, in processes, procedures, standards, internally defined and externally enforced policies, and in the daily activities and knowledge exchanged or held by workers. Sure, the formulations existing in such resources need to be enhanced and aggregated in order to be brought to the status of business rule. And here comes the difficulty, as iterative work needs to be performed in order to bring them to the level indicated by R.G. Ross. For sure Ross’ specifications are idealistic, though they offer a “framework” for defining business rules. In what concerns their management, there is a lot to be done within an organization, as this aspect needs to be integrated with the other activities and strategies existing in the organization.

    Often, when an important initiative, better said a project, starts within an organization, the lack of up-front defined and understood business rules is felt in particular. Such events trigger the identification and elicitation of business rules; they are addressed in documentation and remain buried there. It is also true that it’s difficult to build a business case for the further processing of business rules. An argument could be the costs associated with decisional mistakes made by not knowing the existing rules, though that’s something difficult to quantify and make visible in an organization. In the end, most probably an organization will recognize the value of business rules only when it has reached a certain level of maturity.

References:
[1] Business Rules Group (2000) Defining Business Rules - What Are They Really? [Online] Available from: http://businessrulesgroup.org/first_paper/BRG-whatisBR_3ed.pdf
[2] Ronald G. Ross (2003) Principles of the Business Rule Approach. Addison Wesley. ISBN: 0-201-78893-4.

09 March 2009

🛢DBMS: Trigger (Definitions)

"A special form of stored procedure that goes into effect when a user gives a change command such as insert, delete, or update to a specified table or column. Triggers are often used to enforce referential integrity." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A special form of stored procedure that goes into effect when data within a table is modified. Triggers are often created to enforce integrity or consistency among logically related data in different tables." (Patrick Dalton, "Microsoft SQL Server Black Book", 1997)

"A special type of stored procedure that is set off by actions taken on a table. Triggers allow for complex relationships between tables and complex business rules to be checked automatically." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A stored procedure that executes automatically when data in a specified table is modified. Triggers are often created to enforce referential integrity or consistency among logically related data in different tables." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Code stored in the database that executes automatically when certain events occur. Traditionally associated only with table write events such as INSERT, UPDATE, or DELETE, newer versions of Oracle provide the ability to define triggers on views and on other system events such as logon, logoff, and system error." (Bill Pribyl & Steven Feuerstein, "Learning Oracle PL/SQL", 2001)

"A stored procedure that executes when data in a specified table is modified. Triggers are often created to enforce referential integrity or consistency among logically related data in different tables." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A trigger is a stored procedure that is fired when data is modified from a table using any of the three modification statements: DELETE, INSERT, or UPDATE. FOR and AFTER are synonymous, and are usually implied when referring to triggers, rather than INSTEAD OF triggers. Triggers are often created to enforce referential integrity or consistency among logically related data in different tables." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"A chunk of code that executes when a specified event occurs, usually before or after an INSERT, UPDATE, or DELETE command." (Gavin Powell, "Beginning Database Design", 2006)

"A database method that is automatically invoked as the result of Data Manipulation Language (DML) activity within a persistence mechanism." (Pramod J Sadalage & Scott W Ambler, "Refactoring Databases: Evolutionary Database Design", 2006)

"A stored procedure that is fired when data is modified from a table using any of the three modification statements DELETE, INSERT, or UPDATE. FOR and AFTER are synonymous and are usually implied when referring to triggers rather than INSTEAD OF triggers. Triggers are often created to enforce referential integrity or consistency among logically related data in different tables." (Thomas Moore, "MCTS 70-431: Implementing and Maintaining Microsoft SQL Server 2005", 2006)

"A stored procedure that executes when certain conditions occurs such as when a record is created, modified, or deleted. Triggers can perform special actions such as creating other records or validating changes." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"A type of stored procedure that fires in response to action on a table. DML triggers are associated with INSERT, UPDATE, and DELETE statements. DDL triggers are associated with CREATE, ALTER, and DROP statements." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"Stored in, and managed by, your database server, this software is executed when a certain event occurs. These events can range from information creation or modification to structural changes to your database. When the event occurs, the trigger is executed, causing a pre-determined set of actions to take place. These actions can encompass data validation, alerts, warnings, and other administrative operations. Triggers can invoke other triggers and stored procedures." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A stored procedure that executes in response to a Data Manipulation Language (DML) or Data Definition Language (DDL) event." (Jim Joseph, "Microsoft SQL Server 2008 Reporting Services Unleashed", 2009)

"A SQL program module that is executed when a specific data modification activity occurs. Triggers are stored in the database they manipulate." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

"A stored procedure that can be triggered and executed automatically when a database operation such as insert, update, or delete takes place." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A procedural SQL code that is automatically invoked by the relational database management system upon the occurrence of a data manipulation event." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"A software routine guaranteed to execute when an event occurs. Often a trigger will monitor changes to data values. A trigger includes a monitoring procedure, a set or range of values to check data integrity, and one or more procedures invoked in response, which may update other data or fulfill a data subscription." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"An event that causes a handler to be run." (Jon Orwant et al, "Programming Perl, 4th Ed.", 2012)

"An event-driven specialized procedure that is attached to database tables; typically implemented to support data integrity requirements." (Craig S Mullins, "Database Administration", 2012)

"A database object that is associated with a single base table or view and that defines a rule. The rule consists of a set of SQL statements that runs when an insert, update, or delete database operation occurs on the associated base table or view." (IBM, "Informix Servers 12.1", 2014)

"A database object that is associated with a single base table or view and that defines a rule. The rule consists of a set of SQL statements that runs when an insert, update, or delete database operation occurs on the associated base table or view." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

 "A PL/SQL or Java procedure that fires when a table or view is modified or when specific user or database actions occur. Procedures are explicitly run, whereas triggers are implicitly run." (Oracle, "Oracle Database Concepts")

"A stored procedure that executes in response to a data manipulation language (DML) or data definition language (DDL) event." (Microsoft Technet,)

14 February 2007

🌁Software Engineering: Validation (Definitions)

"An independent test process whereby the performance of the neural network is tested against the acceptance requirements." (Joseph P Bigus, "Data Mining with Neural Networks: Solving Business Problems from Application Development to Decision Support", 1996)

"Confirmation that the product, as provided (or as it will be provided), will fulfill its intended use. In other words, validation ensures that 'you built the right thing'." (Sandy Shrum et al, "CMMI®: Guidelines for Process Integration and Product Improvement", 2003)

"Confirmation or corroboration of something, such as a business need or an identified opinion or recommendation." (Teri Lund & Susan Barksdale, "10 Steps to Successful Strategic Planning", 2006)

"the process of checking that a system meets the user needs." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification." (Cynthia Stackpole, "PMP® Certification All-in-One For Dummies®", 2011)

"Testing if a development result fulfills the individual requirements for a specific use." (Tilo Linz et al, "Software Testing Foundations" 4th Ed., 2014)

"Determines if the product provides the necessary solution for the intended real-world problem." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"Validation is the process of verifying that a document or data structure conforms with its schema or schemas." (Robert J Glushko, "The Discipline of Organizing: Professional Edition" 4th Ed, 2016)

"The assurance that a product, service, or result meets the needs of the customer and other identified stakeholders." (Project Management Institute, "A Guide to the Project Management Body of Knowledge (PMBOK® Guide )", 2017)

"The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers." (Jeffrey K Pinto, "Project Management: Achieving Competitive Advantage" 5th Ed., 2018)

 "activity that ensures a new or changed service, process, plan or other deliverable, meets the needs of the business." (ITIL)

"Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled" [ISO 9000]

"Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled." (NIST SP 800-160)

"Confirmation (through the provision of strong, sound, objective evidence) that requirements for a specific intended use or application have been fulfilled." (NIST SP 800-161)

"Confirmation (through the provision of strong, sound, objective evidence) that requirements for a specific intended use or application have been fulfilled (e.g., a trustworthy credential has been presented, or data or information has been formatted in accordance with a defined set of rules, or a specific process has demonstrated that an entity under consideration meets, in all respects, its defined attributes or requirements)." (CNSSI 4009-2015)

"The process of determining that an object or process is acceptable according to a pre-defined set of tests and the results of those tests." (NIST SP 800-152)

"The process of demonstrating that the system under consideration meets in all respects the specification of that system." (INCITS/M1-040211)
