
01 February 2021

📦Data Migrations (DM): Quality Assurance (Part IV: Quality Acceptance Criteria IV)

Data Migration
Data Migrations Series

Reliability

Reliability is the degree to which a solution performs its intended functions under stated conditions without failure. In other words, a DM is reliable if it performs what was intended by design. The data should be migrated only after the migration’s reliability has been confirmed by the users as part of the sign-off process. The dry runs as well as the final iteration for the UAT have the objective of confirming the solution’s reliability.

Reversibility

Reversibility is the degree to which a solution can return to a previous state without restarting the process from the beginning. For example, it should be possible to reverse the changes made to a table by returning to its previous state. This can involve keeping a copy of the data and, when necessary, deleting and reloading the data from that copy.

Considering that the sequence in which the various activities are performed is fixed, in theory it’s possible to address reversibility by design, e.g. by allowing individual steps to be repeated or by creating rollback points. Rollback points are especially important when loading the data into the target system.
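To illustrate, below is a minimal Python sketch of a rollback point around a load step. It assumes a DB-API-style database connection and SQL Server-style syntax; the table and function names are illustrative, not part of any particular toolset.

# Minimal sketch of a rollback point around a load step (illustrative names;
# assumes a DB-API compliant connection and SQL Server-style SELECT ... INTO).

def load_with_rollback_point(conn, table, load_step):
    """Snapshot `table`, run `load_step`, and restore the snapshot on failure."""
    snapshot = f"{table}_rollback"
    cur = conn.cursor()
    cur.execute(f"SELECT * INTO {snapshot} FROM {table}")    # create the rollback point
    conn.commit()
    try:
        load_step(conn)                                       # e.g. load the next batch
        conn.commit()
    except Exception:
        conn.rollback()
        cur.execute(f"DELETE FROM {table}")                   # return to the previous state
        cur.execute(f"INSERT INTO {table} SELECT * FROM {snapshot}")
        conn.commit()
        raise
    finally:
        cur.execute(f"DROP TABLE {snapshot}")                 # discard the rollback point
        conn.commit()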

Robustness

Robustness is the degree to which the solution can accommodate invalid input or environmental conditions that might affect the data’s processing or other requirements (e.g. performance). Even if the logic can be stabilized over the various iterations, the variance in data quality can have an important impact on a solution’s robustness. One can accommodate erroneous input by relaxing the schema’s rules and adding further quality checks.
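As a rough illustration, the following Python sketch separates records that violate basic quality rules into a reject bucket instead of letting them break the load. The column names and rules are hypothetical and assume the extracted data is available as a pandas DataFrame.

# Minimal sketch of accommodating erroneous input via quality checks
# (hypothetical column names; assumes the data was extracted into pandas).
import pandas as pd

def split_valid_invalid(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Separate records that violate basic quality rules instead of failing the run."""
    checks = (
        df["CustomerId"].notna()                                # mandatory key present
        & df["Country"].str.len().eq(2)                         # two-letter country code expected
        & pd.to_numeric(df["CreditLimit"], errors="coerce").ge(0)
    )
    valid, invalid = df[checks], df[~checks]
    invalid.to_csv("rejected_records.csv", index=False)         # feed back into data cleansing
    return valid, invalid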

Security 

Security is the degree to which the DM solution protects the data so that, as the data are moved through the solution, only authorized people have access to them, and only to the defined level of authorization. The security provided by a solution needs to be considered against the standards and further requirements defined within the organization. In case no such standards are available, one can in theory fall back on industry best practices.

Scalability

Scalability is the degree to which the solution is able to respond to an increased workload. Given that the data considered during the various iterations vary in volume, a solution’s scalability needs to be considered with respect to the volume of data to be migrated.
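One common way to keep the load manageable as volumes grow is to process the data in fixed-size batches. The sketch below assumes a hypothetical load_batch function provided by the target system’s interface; the batch size would be tuned per environment.

# Minimal sketch of batch-wise processing so the solution scales with data volume
# (load_batch is a hypothetical callable; the batch size is environment-specific).
def migrate_in_batches(records: list[dict], load_batch, batch_size: int = 10_000) -> int:
    loaded = 0
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        load_batch(batch)            # push one batch into the target system
        loaded += len(batch)
    return loaded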

Standardization

Standardization is the degree to which technical standards were implemented for a solution to guarantee a certain level of performance or other aspects considered important. There can be standards for data storage, processing, access, transportation, or other aspects associated with the migration processes. Moreover, especially when multiple DMs are in scope, organizations can define a set of standards and guidelines that should be further considered.

Testability

Testability is the degree to which a solution can be tested with respect to the set of functional and data-related requirements. Even if it is the data in their final form that matter for the success of a migration, achieving that requires validating the logic and thoroughly testing the transformations performed on the data. As the data go through the data pipelines, they need to be tested at the critical points – the points where the data undergo important transformations. Moreover, one can consider record counters for the records processed at each such critical point, to assure that no record was lost in the process.
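A simple way to implement such counters is to record the number of records after each critical transformation and reconcile the counts at the end. The Python sketch below uses hypothetical stage names and placeholder transformations.

# Minimal sketch of record counters at the critical points of a data pipeline
# (stage names are hypothetical; the transformations themselves are placeholders).
def run_with_counters(records, stages):
    """Apply each (name, transformation) pair and record the count after it."""
    counts = {"source": len(records)}
    for name, transform in stages:
        records = transform(records)
        counts[name] = len(records)
    return records, counts

# Usage: reconcile the counts, e.g. counts["source"] == counts["loaded"],
# unless records were intentionally filtered out in a documented step.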

Traceability

Traceability is the degree to which the changes performed on the data can be traced from the target back to the source systems at record, respectively entity level. In theory, it’s enough to document the changes at attribute level, though in some cases it might be needed to document also the changes performed on individual values.

Mappings at attribute level allow tracing the data flow, while mappings at value level allow tracing the changes occurring within values.
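As a small illustration, attribute-level and value-level mappings can be documented as simple lookup structures; the entity, attribute and value names below are purely illustrative.

# Minimal sketch of attribute- and value-level mappings used for traceability
# (illustrative entity, attribute and value names).
attribute_mappings = {
    # target entity.attribute      source entity.attribute
    "Customer.Name":               "LegacyCustomer.CUST_NAME",
    "Customer.CountryCode":        "LegacyCustomer.LAND",
}

value_mappings = {
    # attribute                    source value -> target value
    "Customer.CountryCode": {"Deutschland": "DE", "Oesterreich": "AT"},
}

def trace_value(attribute: str, source_value: str) -> str:
    """Return the migrated value, documenting any value-level change applied."""
    return value_mappings.get(attribute, {}).get(source_value, source_value)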

05 January 2021

🧮ERP: Planning (Part II: It’s all about Scope - Nonfunctional Requirements & MVP)

ERP Implementation

Nonfunctional Requirements

In contrast to functional requirements (FRs), nonfunctional requirements (NFRs) have no direct impact on the system’s behavior, yet they affect end-users’ experience with the system, covering topics like performance, usability, reliability, compatibility, security, monitoring, maintainability, testability, respectively other constraints and quality attributes. Even if these requirements are in general addressed by design, the changes made to the system have the potential of impacting the users’ experience negatively.

Moreover, the NFRs are usually difficult to quantify, which is probably why they are seldom made explicit in a formal document or are considered only at a high level. However, one can still find a basis for comparison against compliance requirements, general guidelines, standards, best practices or the legacy system(s) (e.g. the performance should not be worse than in the legacy system, the volume of effort for carrying out the various activities should not increase). Even if they can’t be adequately described, it’s recommended to list the NFRs in general terms in a formal document (e.g. the implementation contract). Failing to do so can open or widen one’s risk exposure, especially when the system lacks important support in the respective areas. In addition, these requirements need to be considered during testing and sign-off as well.

Minimum Viable Product (MVP)

Besides the consideration of gaps with respect to FRs, it’s sometimes important to question whether the whole functionality is mandatory, especially when considering the various activities that need to be carried out (parametrization, Data Migration).

For example, one can target to implement a minimum viable product (MVP) - a version of the product which has just enough features to cover the mandatory or the most important FRs. The MVP is based on the idea that implementing about 80% of the needed functionality has, in theory, the potential of providing a usable product earlier and with a minimum of effort (quick wins), of assuring that the project’s goals and objectives are met, respectively of providing a basis for further development. In case of cost overruns, the MVP assures that the business has a workable product and the opportunity of deciding whether it’s worth investing more into the project now or later.

The MVP also allows getting early user feedback and integrating it into further enhancements and developments. Often the users understand the capabilities of a system, respectively of an implementation, only when they are able to use the system. As this is a learning process, the learning period can take up to a few months until adequate feedback is available. Therefore, postponing the continuation of the implementation by a few months can in theory have a positive impact; however, it can also come with drawbacks (e.g. the resources are not available anymore).

A sketch of the MVP usually results from the prioritization of requirements; however, the requirements then need to be regarded holistically, as there can be different levels of dependencies between them. In addition, different costs can be incurred if the requirements are handled later, and other constraints may apply as well. Considering an MVP approach can be a double-edged sword. In the worst-case scenario, the business will get only the MVP, with its good and bad characteristics. The business will then be forced to fill the gaps by working outside the system, which can lead to further effort and, in extremis, to poor acceptance of the system. In general, users expect to have their processes fully implemented in the system, an expectation which is not always economically grounded.

After establishing an MVP, one can consider the further requirements (including improvement suggestions) on a cost-benefit basis and implement them accordingly as part of a continuous improvement initiative, even if this may require more time.


26 August 2019

🛡️Information Security: Denial of Service [DoS] (Definitions)

"A type of attack on a computer system that ties up critical system resources, making the system temporarily unusable." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"Any attack that affects the availability of a service. Reliability bugs that cause a service to crash or hang are usually potential denial-of-service problems." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"This is a technique for overloading an IT system with a malicious workload, effectively preventing its regular service use." (Martin Oberhofer et al, "The Art of Enterprise Information Architecture", 2010)

"Occurs when a server or Web site receives a flood of traffic - much more traffic or requests for service than it can handle, causing it to crash." (Linda Volonino & Efraim Turban, "Information Technology for Management 8th Ed", 2011)

"Causing an information resource to be partially or completely unable to process requests. This is usually accomplished by flooding the resource with more requests than it can handle, thereby rendering it incapable of providing normal levels of service." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition" 2nd Ed., 2013)

"Attacks designed to disable a resource such as a server, network, or any other service provided by the company. If the attack is successful, the resource is no longer available to legitimate users." (Darril Gibson, "Effective Help Desk Specialist Skills", 2014)

"An attack from a single attacker designed to disrupt or disable the services provided by an IT system. Compare to distributed denial of service (DDoS)." (Darril Gibson, "Effective Help Desk Specialist Skills", 2014)

"A coordinated attack in which the target website or service is flooded with requests for access, to the point that it is completely overwhelmed." (Faithe Wempen, "Computing Fundamentals: Introduction to Computers", 2015)

"An attack that can result in decreased availability of the targeted system." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

"An attack that generally floods a network with traffic. A successful DoS attack renders the network unusable and effectively stops the victim organization’s ability to conduct business." (Weiss, "Auditing IT Infrastructures for Compliance" 2nd Ed., 2015)

"A type of cyberattack to degrade the availability of a target system." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"Any action, or series of actions, that prevents a system, or its resources, from functioning in accordance with its intended purpose." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"The prevention of authorized access to resources or the delaying of time-critical operations." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"An attack shutting down running of a service or network in order to render it inaccessible to its users (whether human person or a processing device)." (Wissam Abbass et al, "Internet of Things Application for Intelligent Cities: Security Risk Assessment Challenges", 2021)

"Actions that prevent the NE from functioning in accordance with its intended purpose. A piece of equipment or entity may be rendered inoperable or forced to operate in a degraded state; operations that depend on timeliness may be delayed." (NIST SP 800-13)

"The prevention of authorized access to resources or the delaying of time-critical operations. (Time-critical may be milliseconds or it may be hours, depending upon the service provided)." (NIST SP 800-12 Rev. 1)

"The prevention of authorized access to a system resource or the delaying of system operations and functions." (NIST SP 800-82 Rev. 2)


01 April 2007

🌁Software Engineering: Failure Modes & Effects Analysis [FMEA] (Definitions)

"A risk-analysis technique that identifies and ranks the potential failure modes of a design or process and then prioritizes improvement actions." (Clyde M Creveling, "Six Sigma for Technical Processes: An Overview for R Executives, Technical Leaders, and Engineering Managers", 2006)

"An approach to analyzing the effect of faults on system reliability." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"An analytical procedure in which each potential failure mode in every component of a product is analyzed to determine its effect on the reliability of that component and, by itself or in combination with other possible failure modes, on the reliability of the product or system and on the required function of the component; or the examination of a product (at the system and/or lower levels) for all ways that a failure may occur. For each potential failure, an estimate is made of its effect on the total system and of its impact. In addition, a review is undertaken of the action planned to minimize the probability of failure and to minimize its effects." (For Dummies, "PMP Certification All-in-One For Dummies, 2nd Ed.", 2013)

"Approach that dissects a component into its basic functions to identify flaws and those flaw’s effects." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"(1) A process for analysis of potential failure modes within a system for classification by severity or determination of the effect of failures on the system. (2) A structured method for analyzing risk by ranking and documenting potential failure mode in a system, design or process. The analysis includes: identification of potential failures and their effects; ranking of factors (e.g., severity, frequency of occurrence, detectability of the potential failures); and identification and results of actions taken to reduce or eliminate risk." (International Aerospace Quality Group)

"A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence." (SQA)

"FMEA (failure modes effects analysis) is a technique used in product life cycle management activities to predict how a product or process might fail and what the effects of that failure might be." (Gartner)

14 February 2007

🌁Software Engineering: Reliability (Definitions)

"[...] the characteristic of an information infrastructure to store and retrieve information in an accessible, secure, maintainable, and fast manner." (Martin J Eppler, "Managing Information Quality 2nd Ed.", 2006)

"The measure of robustness over time. The length of time a product or process performs as intended." (Lynne Hambleton, "Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices", 2007)

"Reliability describes a product’s ability to maintain its defined functions under defined conditions for a specified period of time." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"A stochastic measure of the likelihood that a system will be able to deliver a service." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"The degree to which the new system is perceived as being better than the system it replaces, often expressed in the economic or social status terms that will result from its adoption." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed, 2011)

"The ability for a component (server, application, database, etc.) or group of components to consistently perform its functions." (Craig S Mullins, "Database Administration", 2012)

"A set of characteristics relating to the ability of the software product to perform its required functions under stated conditions for a specified period of time or for a specified number of operations." (Tilo Linz et al, "Software Testing Foundations" 4th Ed, 2014)

 "It is a characteristic of an item (component or system), expressed by the probability that the item (component/system) will perform its required function under given conditions for a stated time interval." (Harish Garg,  "Predicting Uncertain Behavior and Performance Analysis of the Pulping System in a Paper Industry using PSO and Fuzzy Methodology", 2014)

"A characteristic of an item (component or system), expressed by the probability that the item (component/system) will perform its required function under given conditions for a stated time interval." (Harish Garg, "A Hybrid GA-GSA Algorithm for Optimizing the Performance of an Industrial System by Utilizing Uncertain Data", 2015)

"A sub-set of statistical engineering methodology that predicts performance of a product over its intended life cycle and understanding of the effects of various failure modes on system performance." (Atila Ertas, "Transdisciplinary Engineering Design Process", 2018)

"The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations" (ISO 9126)

"The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations." (ISO/IEC 25000)

"The capability of a system or component to perform its required functions under stated conditions for a specified period of time." (IEEE Std 610.12-1990) 

