07 July 2019

🧱IT: Gateway (Definitions)

"A network software product that allows computers or networks running dissimilar protocols to communicate, providing transparent access to a variety of foreign database management systems (DBMSs). A gateway moves specific database connectivity and conversion processing from individual client computers to a single server computer. Communication is enabled by translating up one protocol stack and down the other. Gateways usually operate at the session layer." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"Connectivity software that allows two or more computer systems with different network architectures to communicate." (Sybase, "Glossary", 2005)

"A generic term referring to a computer system that routes data or merges two dissimilar services together." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A software product that allows SQL-based applications to access relational and non-relational data sources." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"An entrance point that allows users to connect from one network to another." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed., 2011)

[database gateway:] "Software required to allow clients to access data stored on database servers over a network connection." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures" 2nd Ed., 2012)

"A connector box that enables you to connect two dissimilar networks." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"A node that handles communication between its LAN and other networks" (Nell Dale & John Lewis, "Computer Science Illuminated, 6th Ed.", 2015)

"A system or device that connects two unlike environments or systems. The gateway is usually required to translate between different types of applications or protocols." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"An application that acts as an intermediary for clients and servers that cannot communicate directly. Acting as both client and server, a gateway application passes requests from a client to a server and returns results from the server to the client." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

06 July 2019

🧱IT: Latency (Definitions)

"The fixed cost of servicing a request, such as sending a message or accessing information from a disk. In parallel computing, the term most often is used to refer to the time it takes to send an empty message over the communication medium, from the time the send routine is called to the time the empty message is received by the recipient. Programs that generate large numbers of small messages are sensitive to the latency and are called latency-bound programs." (Beverly A Sanders, "Patterns for Parallel Programming", 2004)

"The amount of time it takes a system to deliver data in response to a request. For mass storage devices, it is the time it takes to place the read or write heads over the desired spot on the media. In networks, it is a function of the electrical and software properties of the network connection." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"The time delay it takes for a network packet to travel from one destination to another." (John Goodson & Robert A Steward, "The Data Access Handbook", 2009)

"The time it takes for a system to respond to an input." (W Roy Schulte & K Chandy, "Event Processing: Designing IT Systems for Agile Companies", 2009)

"A period of time that the computer must wait while a disk drive is positioning itself to read a particular block of data." (Rod Stephens, "Start Here!™ Fundamentals of Microsoft® .NET Programming", 2011)

"The measure of time between two events, such as the initiation and completion of an event, or the read on one system and the write to another system." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The time period from start to completion of a unit of work." (Max Domeika, "Software Development for Embedded Multi-core Systems", 2011)

"The time it takes to complete a task - that is, the time between when the task begins and when it ends. Latency has units of time. The scale can be anywhere from nanoseconds to days. Lower latency is better in general." (Michael McCool et al, "Structured Parallel Programming", 2012)

"The amount of time lag before a service executes in an environment. Some applications require less latency and need to respond in near real time, whereas other applications are less time-sensitive." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A delay. Can apply to the sending, processing, transmission, storage, or receiving of information." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

"A period of waiting for another component to deliver data needed to proceed." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"The time it takes for the specified sector to be in position under the read/write head" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"The delay between when an action such as transmitting data is taken and when it has an effect." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

05 July 2019

🧱IT: Automation (Definitions)

"The act of replacing control of a manual process with computer or electronic controls." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

[soft automation:] "automation that is configurable through software without requiring changes to the underlying code of the software itself." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

[hard automation:] "automation that requires computer programming to be altered if changes are required." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

[Decision Automation:] "This broad term refers to computerized systems that make decisions and have some capability to independently act upon them. Decision automation refers to using technologies, including computer processing, to make decisions and implement programmed decision processes." (Ciara Heavin & Daniel J Power, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

"Automation is machine-controlled execution of actions, based on artificial intelligence and machine learning that do not require human intervention. It enables speed to action to help reduce time taken by human operators." (Heru Susanto et al, "Data Security for Connected Governments and Organisations: Managing Automation and Artificial Intelligence", 2021)

"refers to the technology where procedures or processes are performed with minimal human intervention. Machines can be configured based on an explicit set of rules or algorithms." (Accenture)

"performing all or part of a set of tasks with a machine rather than through human effort (NRC 1998)

[Intelligent Automation:] "refers to an automation solution that is enhanced with cognitive capabilities that enable programs and machines to learn, interpret and respond." (Accenture)
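
A minimal sketch contrasting the hard and soft automation defined above; the rule, threshold and field names are invented for illustration:

def archive_hard(records):
    # hard automation: changing the 90-day threshold requires altering the code
    return [r for r in records if r["age_days"] > 90]

def archive_soft(records, config):
    # soft automation: the threshold is configurable without touching the code
    return [r for r in records if r["age_days"] > config["archive_after_days"]]

records = [{"id": 1, "age_days": 120}, {"id": 2, "age_days": 30}]
print(archive_soft(records, {"archive_after_days": 90}))   # [{'id': 1, 'age_days': 120}]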

04 July 2019

🧱IT: Artifact (Definitions)

 "a design technique used to represent referential integrity in the DSS environment." (William H Inmon, "Building the Data Warehouse", 2005)

"A tangible object produced by an activity. Examples are specifications, design documents, audit records, code, data, reports, plans, schedules, and training courses. The object can be a product component or a work product." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"A document, model, file, diagram, or other item that is produced, modified, or used during the development, operation, or support of a system." (Pramod J Sadalage & Scott W Ambler, "Refactoring Databases: Evolutionary Database Design", 2006)

"A tangible form of objective evidence indicative of work being performed that is a direct or indirect result of implementing a People CMM model practice." (Sally A Miller et al, "People CMM: A Framework for Human Capital Management" 2nd Ed., 2009)

"An object made or modified by a human." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Description of a part of the architecture; generally organized into catalogs (lists of objects), matrices (which include the relationships between objects), and diagrams (graphical representations)." (Gilbert Raymond & Philippe Desfray, "Modeling Enterprise Architecture with TOGAF", 2014)

"In a UML deployment diagram, a file, a script, an executable program or another item that is deployed. In development models, something generated by the model such as a requirements document, user story, or piece of code." (Rod Stephens, "Beginning Software Engineering", 2015)

"A physical or digital result from an interaction or transaction. Example: a receipt is an artifact of a transaction." (Gregory Lampshire, "The Data and Analytics Playbook", 2016)

"Any object created by human beings with the intent to be of subsequent use, either as a reference or something that could be improved as part of an effort to enhance it." (David K Pham, "From Business Strategy to Information Technology Roadmap", 2016)

"In research, any apparent effect of a major conceptual variable that is actually the result of a confounding variable that has not been properly controlled. Artifacts threaten the validity of research conclusions." (K  N Krishnaswamy et al, "Management Research Methodology: Integration of Principles, Methods and Techniques", 2016)

"An entity that is used or produced by a software development process. Examples of artifacts are models, source files, scripts, and binary executable files." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

🧱IT: Expert System (Definitions)

"A data processing system composed of a knowledge base (rules), an inference engine, and a working memory." (Joseph P Bigus, "Data Mining with Neural Networks: Solving Business Problems fromApplication Development to Decision Support", 1996)

"A computer system that tries to simulate a human expert. A search tree and method of traversal in artificial intelligence. The expert provides her knowledge as if-then rules and a programmer codes these in software. Expert systems define a large logic tree or several small trees. The expert system has two parts: the knowledge base and the inference engine. The knowledge base is just the tree or trees of bivalent rules. The inference engine is some scheme for reasoning or 'chaining' the rules. Fuzzy systems are a type of expert system since they too store knowledge as rules, but as fuzzy rules or fuzzy patches. Expert systems work with black-white logic and symbols. Fuzzy systems work with fuzzy sets and have a numerical or mathematical basis that permits both mathematical analysis and simple chip design." (Guido Deboeck & Teuvo Kohonen (Eds), "Visual Explorations in Finance with Self-Organizing Maps 2nd Ed.", 2000)

"A computer program that has a deep understanding of a topic, and can simulate a human expert, asking and answering questions and making decisions." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)

"An artificial intelligence system driven by rules based on the skills and experience of one or more experts in a given field, so the system processes information the same way an expert person does. Expert systems are deterministic, versus neural networks, which are non-deterministic." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A software system based on the knowledge of human experts" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"Fall under the computer applications category of artificial intelligence. Composed of a knowledge base, an inference system, and a human machine interface." (Joan C Dessinger, "Fundamentals of Performance Improvement 3rd Ed", 2012)

"A computer system that emulates the decision-making ability of a human expert. Inference in expert systems applies logical rules to a knowledge base and deduces new knowledge from it." (Accenture)

03 July 2019

🧱IT: Interprocess Communication (Definitions)

"A method of letting threads and processes transfer data and messages among themselves; used to offer services to and receive services from other programs (for example, named pipes)." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A system by which threads and processes can transfer data and messages among themselves. Interprocess communication (IPC) is used to offer and receive services from other programs." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A mechanism through which operating system processes and threads exchange data and messages. IPCs include local mechanisms, such as Windows shared memory, or network mechanisms, such as Windows Sockets." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A mechanism of an operating system that allows processes to communicate with each other within the same computer or over a network." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"The ability of one task or process to communicate with another in a multitasking operating system. Common methods include pipes, semaphores, shared memory, queues, signals, and mailboxes." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A tool designed for developers to allow communication and sharing of data between applications." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web 2nd Ed.", 2015)

🧱IT: Redundant Array of Independent Disks [RAID] (Definitions)

"Installation of several disk drives to a system. Some drives contain mirrored information so data is not lost. RAID disk drives can be replaced quickly in cases of disk failure. This technology is good for Web and database servers, so that no information is lost and the information is always available." (Patrick Dalton, "Microsoft SQL Server Black Book", 1997)

"Sometimes referred to as redundant array of inexpensive disks, a system that uses multiple disk drives (an array) to provide performance and reliability. There are six levels describing RAID arrays, 0 through 5. Each level uses a different algorithm to implement fault tolerance." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A disk system that comprises multiple disk drives (an array) to provide higher performance, reliability, storage capacity, and lower cost. Fault-tolerant arrays are categorized in six RAID levels: 0 through 5. Each level uses a different algorithm to implement fault tolerance." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"A specific fault-tolerant disk array system design strategy that takes into account issues of cost benefit, reliability, and performance. It can be implemented at a hardware or a software level; each provides a different profile of cost, reliability, and performance. Depending on the person defining RAID, the word independent may be substituted with inexpensive." (Allan Hirt et al, "Microsoft SQL Server 2000 High Availability", 2004)

"A bunch of small, cheap disks. A RAID array is a group of disks used together as a single unit logical disk. RAID arrays can help with storage capacity, recoverability and performance, using what are called mirroring and striping. Mirroring creates duplicate copies of all physical data. Striping breaks data into many small pieces, where those small pieces can be accessed in parallel." (Gavin Powell, "Beginning Database Design", 2006)

"A schema for using groups of disks to increase performance, protect data, or both." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"This is a grouping, or array, of hard disks that appear as a single, logical drive to the operating system." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"RAID is an acronym for Redundant Array of Independent Disks. RAID is a collection of disks that operates as a single disk." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A RAID array uses multiple physical disks to simulate one logical, larger disk, often with protection from disk failure. (The I can also stand for Independent, and the D can also stand for Drives.) " (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)

"Using more disks than is necessary for the actual data itself, as a buffer against failure of one (or possibly more) disks." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"A category of disk drives that employ two or more drives in combination for fault tolerance and performance." (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"A system of disk storage where data is distributed across several drives for faster access and improved fault tolerance." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A technology for configuring a logical data storage device across multiple physical devices to improve performance, availability or both. The primary goal is fault tolerance as in most configurations data can be recovered after a device failure and in some cases, without interruption." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"An acronym that means Redundant Array of Independent Disks. RAID is used to provide balance between performance and fault tolerance. RAID systems use multiple disks to create virtual disks (storage volumes) formed by several individual disks. RAID systems provide performance improvement and fault tolerance." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"A category of disk drives that employ two or more drives in combination to deliver fault tolerance and improved performance." (Craig S Mullins, "Database Administration", 2012)

"A multi-disk storage system that optimizes performance, data safety, or both, depending on the type." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

02 July 2019

🧱IT: Peer-to-Peer Network (Definitions)

[peer-to-peer computing:] "Users loosely connected through online connections that enable them to share data and programs." (Greg Perry, "Sams Teach Yourself Beginning Programming in 24 Hours 2nd Ed.", 2001)

[peer-to-peer computing:] "A distributed computing model in which each node has equal standing among the collection of nodes. In the most typical usage of this term, the same capabilities are offered by each node, and any node can initiate a communication session with another node. This contrasts with, for example, client-server computing. The capabilities that are shared in peer-to-peer computing include file-sharing as well as computation." (Beverly A Sanders, "Patterns for Parallel Programming", 2004)

"A network comprised of individual participants that have equal capabilities and duties." (Andy Walker, "Absolute Beginner’s Guide To: Security, Spam, Spyware & Viruses", 2005)

"A blanket term used to describe: (1) a peer-centric distributed software architecture, (2) a flavor of software that encourages collaboration and file sharing between peers, and (3) a cultural progression in the way humans and applications interact with each other that emphasizes two way interactive 'conversations' in place of the Web’s initial television-like communication model (where information only flows in one direction)." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)

"A networking system in which nodes in a network exchange data directly instead of going through a central server. " (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A network where all computers can both share and acces resources from other computers on the same network; a decentralized network." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"A type of network in which a group of personal computers is interconnected so that the hard disks, CD ROMs, files, and printers of each computer can be accessed from every other computer on the network. Peer-to-peer networks do not have a central file server. This type of system is used if less than a dozen computers will be networked." (James R Kalyvas & Michael R Overly, "Big Data: A Businessand Legal Guide", 2015)

"A decentralized network where participants have equal privileges and make certain resources directly available to other network participants." (AICPA)

🧱IT: Subject Matter Expert [SME] (Definitions)

"An expert on the subject of the area on which the data analysis or mining exercise is focused." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

"SME is an acronym used to refer to people who understand and can explain information related to a knowledge domain. SMEs may be technically or business-oriented." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

"The subject matter expert is the business representative with the required understanding of the existing business environments and of the requirements." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"An individual with a large amount of knowledge about one or more areas of subject matter in an organization." (Margaret Y Chu, "Blissful Data ", 2004)

"A person with significant experience and knowledge of a given topic or function." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A person with deep knowledge of a particular topical area. SMEs can be useful in the consultation phase of the taxonomy design process." (Robert F Smallwood, "Managing Electronic Records: Methods, Best Practices, and Technologies", 2013)

"A person, usually an accomplished performer, who knows the knowledge, performance, and personal competence required for a given unit of competence." (Project Management Institute, "Project Manager Competency Development Framework" 3rd Ed., 2017)

"The individuals who are brought on to the project as needed based on their subject matter expertise." (Timothy J  Kloppenborg et al, "Project Leadership", 2003)

"IT professionals within a vendor organisation who have expertise in their field and are often used by a training team to deliver advanced topics to students or assist in content development. " (BCS Learning & Development Limited, "CEdMA Europe", 2019)

01 July 2019

🧱IT: Data Models (Definitions)

"A system data model is a collection of the information being addressed by a specific system or function such as a billing system, data warehouse, or data mart. The system model is an electronic representation of the information needed by that system. It is independent of any specific technology or DBMS environment." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"The business data model, sometimes known as the logical data model, describes the major things ('entities') of interest to the company and the relationships between pairs of these entities. It is an abstraction or representation of the data in a given business environment, and it provides the benefits cited for any model. It helps people envision how the information in the business relates to other information in the business ('how the parts fit together')." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

"The technology data model is a collection of the specific information being addressed by a particular system and implemented on a specific platform." (Claudia Imhoff et al, "Mastering Data Warehouse Design", 2003)

[enterprise data model:] "A high-level, enterprise-wide framework that describes the subject areas, sources, business dimensions, business rules, and semantics of an enterprise." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling 2nd Ed.", 2005)

[canonical data model:] "The definition of a standard organization view of a particular information subject. To be practical, canonical data models include a mapping back to each application view of the same subject." (David Lyle & John G Schmidt, "Lean Integration", 2010)

[hierarchical data model:] "A data model that represents data in a tree-like structure of only one-to-many relationships, where each entity may have a ‘many’ side when related to a parent, and a ‘one’ side when related to a child." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

[Enterprise Data Model (EDM):] "A conceptual data model or logical data model providing a common consistent view of shared data across the enterprise, however that is defined, at a point in time. It is common to use the term to mean a high-level, simplified data model, but that is a question of abstraction for presentation." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

[network data model:] "A representation of objects and their participation in one or more owner-member sets. In such a model, both owners and members may participate in multiple sets, affecting a network of objects and relationships." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

[enterprise data model:] "A single data model that comprehensively describes the data needs of the entire organization." (Craig S Mullins, "Database Administration", 2012)

[generic data model:] "A data model of an industry, rather than of a specific company; a generic data model can be used as a template that can be customized for a given company within the industry that has been modeled." (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

[corporate data model:] "A data model at the corporate level for the core business objects and their relationship with each other, which is based on the core business object model." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

[Enterprise data model:] "The enterprise data model in many literatures and viewpoints is considered to be a single, standalone unified artifact that describes all data entities and data attributes and their relationships across the enterprise. In most cases this model is combined with the ambition to consolidate all data in an enterprise data warehouse (EDW)." (Piethein Strengholt, "Data Management at Scale", 2020)
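
A minimal sketch contrasting the hierarchical and network data models defined above - a tree of strictly one-to-many parent-child relationships versus owner-member sets in which a member may belong to several owners; the records are invented:

# hierarchical model: each child has exactly one parent
hierarchical = {
    "Department A": {"Team A1": ["Alice", "Bob"], "Team A2": ["Carol"]},
}

# network model: 'Alice' participates in two owner-member sets
network_sets = {
    ("Department A", "members"): ["Alice", "Bob", "Carol"],
    ("Project X", "members"): ["Alice", "Carol"],
}
print([owner for (owner, _), members in network_sets.items() if "Alice" in members])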

🧱IT: User Datagram Protocol (Definitions)

 "Network standard that does not check for errors, and as a result, has less overhead and is faster than a connection-oriented protocol such as TCP. With UDP, the quality of the transmission is sacrificed for speed." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed., 2011)

"A core protocol of the Internet Protocol suite. UDP is a connectionless protocol, which provides no guarantee of delivery." (Weiss, "Auditing IT Infrastructures for Compliance" 2nd Ed., 2015)

"Connectionless, unreliable transport layer protocol, which is considered a 'best effort' protocol." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK 4th" Ed., 2015)

"An unreliable network protocol that is fast and efficient in which data is transmitted once to a recipient, but the transmitter is not guaranteed delivery." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"A connectionless unreliable protocol. UDP describes a network data connection based on datagrams with little packet control." (Daniel Leuck et al, "Learning Java" 5th Ed., 2020)

🧱IT: System (Definitions)

"A combination of components working together. For example, a computer system includes both hardware and software." (Timothy J  Kloppenborg et al, "Project Leadership", 2003)

"A part of the world that is the subject of a model, communication, or reasoning. In the context of this book the word system is most commonly used for a software system." (Anneke Kleppe et al, "MDA Explained: The Model Driven Architecture: Practice and Promise", 2003)

"A collection of interrelated components that operate together to achieve some desired function, and support organizational mission or business objectives. A system may include software, hardware, and operational data. A system is operated by people. The word system often connotes a large product designed to meet the needs of a particular group of users." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"An integrated group of subsystems, subassemblies, and components that make up a functioning unit that harmonizes the mass, energy and information flows, and transformations of the elements to provide an overall product output that fulfills the customer-based requirements." (Clyde M Creveling, "Six Sigma for Technical Processes: An Overview for R Executives, Technical Leaders, and Engineering Managers", 2006)

"[...] a business system (not just the computer or the software system)." (Suzanne Robertson & James Robertson, "Mastering the Requirements Process" 2nd Ed., 2006)

"Product or product component that in turn consists of interacting subsystems." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"A system is a compound entity (i.e. has parts within) that produces results above and beyond the sum of the contributions of its parts. Systems produce results determined by their structure. Systems mayor may not have been consciously designed." (Aldo Romano & Giustina Secundo (Eds.),, "Dynamic Learning Networks: Models and Cases in Action", 2009)

"A combination of hardware, software, and data devices." (Janice M Roehl-Anderson, "IT Best Practices for Financial Managers", 2010)

"A mechanism type that consists of one or more linked computers, along with associated software." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"A set of interacting or interdependent computer hardware and software components forming an integrated unit. In this book, the terms system and application are often used interchangeably. A business system supports capabilities in a particular business domain (such as finance, marketing, manufacturing, sales, etc.), whereas an integration system supports capabilities in a particular integration discipline (such as data integration, process integration, data quality, business intelligence, etc.). See also the definition for system-of-systems." (David Lyle & John G Schmidt, "Lean Integration", 2010)

"An interacting and interdependent group of component items forming a unified whole to achieve a common purpose." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A system is a set of connected things forming a complex whole (NOAD). Systems are often understood as mechanisms, but they can also comprise a set of principles or a method for doing things. In modern organizations, the term system is used to refer to information technology applications that carry out work needed by an organization. An example is a system for processing claims or one for inventorying products." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)

"Connecting aspects of a complex whole; interconnectedness; interdependence." (Joan C Dessinger, "Fundamentals of Performance Improvement 3rd Ed", 2012)

"A collection of various components that together can produce results not obtainable by the components alone." (Project Management Institute, "Navigating Complexity: A Practice Guide", 2014)

"A collection of components that operate together to achieve a larger function." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"A system is a framework that orders and sequences activity within the organisation to achieve a purpose within a band of variance that is acceptable to the owner of the system.  Systems are the organisational equivalent of behaviour in human interaction. Systems are the means by which organisations put policies into action.  It is the owner of a system who has the authority to change it, hence his or her clear acceptance of the degree of variation generated by the existing system." (Catherine Burke et al, "Systems Leadership" 2nd Ed., 2018)

"An integrated set of regularly interacting or interdependent components created to accomplish a defined objective, with defined and maintained relationships among its components, and the whole producing or operating better than the simple sum of its components." (Project Management Institute, "Practice Standard for Scheduling"  3rd Ed., 2019)

"A collection of components organized to accomplish a specific function or set of functions." (IEEE 610)

06 June 2019

🧱ITIL: Change Request (Definitions)

"Any request submitted by a customer (buyer or users) for a change to alter the system. These appear in various forms and include software trouble reports (STRs) and baseline change requests (BCRs). BCRs request changes to the system’s specification." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"Requests to expand or reduce the project scope, modify policies, processes, plans, or procedures, modify costs or budgets, or revise schedules." (Project Management Institute, "Practice Standard for Project Estimating", 2010)

"A document requesting a change for a project." (Bonnie Biafore, "Successful Project Management: Applying Best Practices and Real-World Techniques with Microsoft® Project", 2011)

"An official document requesting modification of existing features, requirements or functions or new ones. Change Request should contain description of the current solution, justification for a change and suggested (desired) solution." (IQBBA, "Standard glossary of terms used in Software Engineering", 2011)

"An appeal to the change review board for a modification to the project by any stakeholder. The request typically includes the justification, relationship of the change to the project goal and objectives, description of the change and deliverables, and effect on project risk." (Bonnie Biafore & Teresa Stover, "Your Project Management Coach: Best Practices for Managing Projects in the Real World", 2012)

"Change management notification for change in current process/environment." (Bill Holtsnider & Brian D Jaffe, "IT Manager's Handbook" 3rd Ed, 2012)

"A formal proposal to modify any document, deliverable, or baseline." (For Dummies, "PMP Certification All-in-One For Dummies" 2nd Ed., 2013)

"1. Written request or proposal to perform a specific change for a development product or to allow it to be implemented. 2. A request to change some software artifact due to a change in requirements." (Tilo Linz et al, "Software Testing Foundations" 4th Ed, 2014)

"Change management notification for a change in current process/environment." (Bill Holtsnider & Brian D Jaffe, "IT Manager's Handbook" 3rd Ed", 2012)

15 May 2019

#️⃣Software Engineering: Programming (Part XV: Rapid Prototyping - Introduction)

Rapid (software) prototyping (RSP) is a group of techniques applied in Software Engineering to quickly build a prototype (aka mockup or wireframe) in order to verify the technical or factual realization and feasibility of an application architecture, process or business model. A related notion is the Proof-of-Concept (PoC), which attempts to demonstrate - by building a prototype, starting an experiment or running a pilot project - that a technical concept, business proposal or theory has practical potential. In other words, in Software Engineering RSP encompasses the techniques by which a PoC is led.

In industries that build physical products, a prototype is typically a small-scale object made from inexpensive material that resembles the final product to a certain degree, with some characteristics, details or features ignored altogether (e.g. the inner design, some components, the finishing). Building several prototypes is much easier and cheaper than building the end product, allowing one to play with a concept or idea until it gets close to the final product. Moreover, this approach reduces the risk of ending up with a product nobody wants.

A similar approach and reasoning apply in Software Engineering as well. Building a prototype allows focusing at the beginning on the essential characteristics or aspects of the application, process or (business) model under consideration. Depending on the case, one can focus on the user interface (UI), database access, the integration mechanism, or any other feature that poses a challenge. As in the case of the UI, one can build several prototypes that demonstrate different designs or architectures. The initial prototype can go through a series of transformations until it reaches the desired form, after which more functionality is integrated and the end product is refined gradually. This iterative and incremental approach is known as rapid evolutionary prototyping.

A prototype is especially useful when dealing with uncertainty, e.g. when adopting (new) technologies or methodologies, when mixing technologies within an architecture, when the details of the implementation are not known, when exploring an idea, or when the requirements are expected to change often. Rapidly building a prototype allows validating the requirements, responding agilely to change, and getting customers’ feedback and sign-off as early as possible, showing them what’s possible and what the future application can look like, all without investing too much effort. It’s easier to change a design or an architecture in the concept and design phases than later.

In BI, prototyping usually amounts to building queries to identify the source of the data, reengineer the logic from the business application, and prove whether the logic is technically feasible, feasibility translating into robustness, performance and flexibility. In projects with a broader scope one can attempt building the needed infrastructure for several reports, to make sure that the main requirements are met. Similarly, one can use prototyping to build a data warehouse or a data migration layer. Thus, one can build all or most of the logic for one or two entities, resolve the challenges for them, and once those challenges are solved, gradually integrate the other entities.
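
As a small, invented illustration of the kind of throw-away query used in such BI prototyping, the following Python/SQLite sketch reimplements one made-up business rule against a tiny data sample to check that the logic is feasible:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, status TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "open", 100.0), (2, "closed", 250.0), (3, "closed", 80.0)])

# prototype of the rule 'revenue = the sum of closed orders'
revenue = con.execute("SELECT SUM(amount) FROM orders WHERE status = 'closed'").fetchone()[0]
print(revenue)   # 330.0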

Rapid prototyping can also be used in the implementation of a strategy or management system to prove the concepts behind it. One can thus start with a narrow focus and gradually integrate more functions, processes and business segments in iterative and incremental steps, each step allowing one to incorporate the lessons learned, address the risks and opportunities, check the progress and change direction as needed.

Rapid prototyping can prove to be a useful tool when given the chance to demonstrate its benefits. Through its iterative and incremental approach it allows reaching the targets efficiently.


13 May 2019

#️⃣Software Engineering: Programming (Part XIV: Good Programmer, Bad Programmer)

The use of denominations like 'good' or 'bad' in relation to programmers and programming carries with it a thin separation between two perceptual poles that represent the end results of the programming process, reflecting the quality of the code delivered and, respectively, the quality of a programmer’s effort and behavior as a whole. This means that the usage of the two denominations is often contextual, 'good' and 'bad' being moving points on an imaginary value scale, with a wide range of values within and outside the interval determined by the two.

The 'good programmer' label is an idealization of the traits associated with being a programmer – analyzing and understanding the requirements, filling the gaps when necessary, translating the requirements into robust designs, developing quality code with a minimum of overwork, delivering on time, being able to help others, to work as part of a (self-organizing) team or alone when the project requires it, to follow methodologies, processes or best practices, etc. The problem with such a definition is that there's no fixed limit, considering that a programmer’s job description can include an extensive range of requirements.

The 'bad programmer' label is generally used when programmers (repeatedly) fail to meet others’ expectations, the labeling occasionally being done independently of one’s experience in the field. The volume of bugs and mistakes, the fuzziness of the designs and of the code written, the lack of comments and documentation, and the lack of adherence to methodologies, processes, best practices and naming conventions are often considered indicators for such labels. Sometimes even the smallest mistakes, or a wrong perception of one’s effort and abilities, can trigger such labels.

Labeling people as 'good' or 'bad' tends to reinforce one's initial perception, in extremis leading to self-fulfilling prophecies - predictions that directly or indirectly cause themselves to become true by the very terms in which they came into being. Thus, when somebody labels another as 'good' or 'bad', he will more likely look for signs that reinforce his previous beliefs. This leads to situations in which 'good' programmers’ mistakes are more easily overlooked than 'bad' programmers' mistakes, even if the mistakes are similar.

A good label can in theory motivate, while a bad label can easily demotivate, though the effects vary from person to person. Such labels can become a problem especially for beginners, because they can easily affect beginners' perception of themselves. It’s so easy to forget that programming is a continuous learning process in which knowledge is relative and highly contextual, each person having strengths and weaknesses.

Each programmer has a particular set of skills that differentiates him from other programmers. Each programmer is unique, an aspect reflected in the code one writes. Expecting programmers to fit an ideal pattern is unrealistic. Instead of using labels, one should attempt to strengthen the weaknesses and make adequate use of a person’s strengths. In this approach reside the seeds of personal growth and excellence.

There are also programmers who excel in certain areas - conceptual creativity, the ability to identify, analyze and solve problems, speed, ingenuity of design and of making the best use of the available tools, etc. Such programmers, as Randall Stross formulates it, “are an order of magnitude better” than others. Experience and skills, harnessed with intelligence, have a transformational power that is achievable by each programmer in time.

Even if we can’t always avoid such labeling, it’s important to become aware of the latent force the labels carry with them and of the effect they have on our colleagues and teammates. A label can easily act as a boomerang, hitting us back long after it was thrown.

