
01 February 2021

Data Migrations (DM): Quality Acceptance Criteria V

Data Migration

Efficiency 

Efficiency is the degree to which a solution uses the hardware (storage, network) and other organizational resources to fulfill a given task. Data characterized by high volume, velocity, variety and veracity can be challenging to process, requiring in some cases more processing power. Therefore, DM solutions need to consider these aspects as well. However, efficiency refers to whether the available resources are used efficiently – whether the waste in terms of resource utilization is minimal.

On the other hand, some waste of resources can be acceptable when other benefits or requirements need to be considered, respectively when the ratio between resource utilization and the effort to build more efficient processes is acceptable.

A DM solution involves iterative and exploratory processes in which knowledge and feedback are integrated in each iteration, so it might look as if resources are not used efficiently. However, this is a way to handle complexity and uncertainty by breaking the effort into manageable chunks.

Learnability

Learnability is the degree to which a person can become familiar with a solution’s use, the data and the processes associated with it. A DM can be challenging for many technical and non-technical resources as it requires a certain skillset and an understanding of the requirements, needs and deliverables. The complexity of the data and requirements can be overwhelming; however, with appropriate communication and awareness established, the challenges can be overcome.

Stability

Stability is the degree to which a solution is sensitive to environment changes (e.g. overuse of resources, hardware or software failures, updates), respectively whether it performs without performance defects and does not crash under defined levels of stress. Stability can be monitored during the various phases, and countermeasures need to be considered in case the solution is not stable enough (e.g. redesigning the solution, breaking the data into smaller chunks).

Suitability 

Suitability is the degree to which a solution provides functions that meet the stated and implied needs. No matter how performant and technologically advanced a solution is, it brings little value as long as it doesn’t do what it was intended to do.

Transparency 

Transparency is the degree to which a solution’s stakeholders have access to the requirements, processes, data, documentation, or other information required by them. In a DM, transparency is especially important in respect to the data, logic and rules used in data processing, respectively the number of records processed.

Trustability

Trustability is the degree to which a solution can be trusted to provide the expected results. Even if the technical team assures that the solution can deliver what was intended, the success of a DM is a matter of perception from the stakeholders’ perspective. Providing transparency into the data, rules and processes can improve the level of trust; however, special attention needs to be given to the issues raised by stakeholders during and after Go-Live, as differences need to be mitigated.

Understandability 

Understandability is the degree to which the requirements of a solution were understood by the resources involved in terms of what needs to be performed. For the average project resource it might be challenging to understand the implications of a DM, and this can apply to technical as well as non-technical resources. Making people aware of the implications is probably one of the most important criteria for success, as the success of a migration is often a matter of perception.

Usability 

Usability is the degree to which a solution can be used by the targeted users within the agreed context of usage. Ideally DM solutions need to be easy to use, though there are always trade-offs. In the end, a DM must fit the purpose it was built for. 


Data Migrations (DM): Quality Acceptance Criteria IV

Data Migration
Data Migrations Series

Reliability

Reliability is the degree to which a solution performs its intended functions under stated conditions without failure. In other words, a DM is reliable if it performs what was intended by design. The data should be migrated only after the migration’s reliability has been confirmed by the users as part of the sign-off process. The dry-runs, as well as the final iteration for the UAT, have the objective of confirming the solution’s reliability.

Reversibility

Reversibility is the degree to which a solution can return to a previous state without starting the process from the beginning. For example, it should be possible to reverse the changes made to a table by returning to the previous state. This can involve keeping a stored copy of the data, respectively deleting and reloading the data when necessary.

Considering that the sequence of the various activities is fixed, in theory it’s possible to address reversibility by design, e.g. by allowing individual steps to be repeated or by creating rollback points. Rollback points are especially important when loading the data into the target system.
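
As a rough sketch (in Python, with a hypothetical staging table and load step), a rollback point can be approximated by wrapping the load step in a transaction, so a failed step returns the table to its previous state:

```python
import sqlite3

def load_with_rollback_point(conn, load_step):
    """Run a load step inside a transaction so that, in case of failure,
    the staging data returns to the state it had before the step."""
    with conn:             # opens a transaction; commits on success,
        load_step(conn)    # rolls back automatically if the step raises

# usage sketch with a hypothetical staging table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_customers (id INTEGER, name TEXT)")

def load_customers(c):
    c.execute("INSERT INTO stg_customers VALUES (1, 'Contoso')")

load_with_rollback_point(conn, load_customers)
print(conn.execute("SELECT COUNT(*) FROM stg_customers").fetchone()[0])  # 1
```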

Robustness

Robustness is the degree to which the solution can accommodate invalid input or environmental conditions that might affect data’s processing or other requirements (e.g. performance). Even if the logic can be stabilized over the various iterations, the variance in data quality can have an important impact on a solution’s robustness. One can accommodate erroneous input by relaxing the schema’s rules and adding further quality checks.
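
A minimal sketch of the idea in Python, assuming records arrive as dictionaries and with illustrative field names: erroneous input is split off and reported through an additional quality check rather than breaking the load:

```python
def split_valid_invalid(records, required_fields=("customer_id", "country")):
    """Separate records that violate basic rules from the rest,
    so bad input degrades the load gracefully instead of breaking it."""
    valid, invalid = [], []
    for rec in records:
        if all(rec.get(f) not in (None, "") for f in required_fields):
            valid.append(rec)
        else:
            invalid.append(rec)  # reported separately as a quality check
    return valid, invalid

valid, invalid = split_valid_invalid([
    {"customer_id": 1, "country": "DE"},
    {"customer_id": 2, "country": ""},   # fails the relaxed check
])
print(len(valid), len(invalid))  # 1 1
```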

Security 

Security is the degree to which the DM solution protects the data so that only authorized people have access to the respective data to the defined level of authorization as data are moved through the solution. The security provided by a solution needs to be considered against the standards and further requirements defined within the organization. In case no such standards are available, one can in theory consider the industry best practices.

Scalability

Scalability is the degree to which the solution is able to respond to an increased workload. Given that the data considered during the various iterations vary in volume, a solution’s scalability needs to be considered in respect to the volume of data to be migrated.

Standardization

Standardization is the degree to which technical standards were implemented for a solution to guarantee a certain level of performance or other aspects considered as important. There can be standards for data storage, processing, access, transportation, or other aspects associated with the migration processes. Moreover, especially when multiple DMs are in scope, organizations can define a set of standards and guidelines that should be further considered.

Testability

Testability is the degree to which a solution can be tested with respect to the set of functional and data-related requirements. Even if what matters for the success of a migration is the data in their final form, achieving that requires validating the logic and thoroughly testing the transformations performed on the data. As the data go through the data pipelines, they need to be tested at the critical points – points where the data suffer important transformations. Moreover, one can keep record counts for the records processed at each such critical point, to assure that no record was lost in the process.
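
A minimal sketch in Python of such record counts, assuming the pipeline steps are plain functions over lists of records (the step names are made up):

```python
def run_with_record_counts(data, steps):
    """Apply each transformation and record the row count at every
    critical point, so losses can be spotted between steps."""
    counts = {"input": len(data)}
    for name, step in steps:
        data = step(data)
        counts[name] = len(data)
    return data, counts

# hypothetical steps
steps = [
    ("cleansed", lambda rows: [r for r in rows if r.get("name")]),
    ("deduplicated", lambda rows: list({r["id"]: r for r in rows}.values())),
]
rows = [{"id": 1, "name": "A"}, {"id": 1, "name": "A"}, {"id": 2, "name": ""}]
result, counts = run_with_record_counts(rows, steps)
print(counts)  # {'input': 3, 'cleansed': 2, 'deduplicated': 1}
```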

Traceability

Traceability is the degree to which the changes performed on the data can be traced from the target to the source systems at record, respectively entity level. In theory, it’s enough to document the changes at attribute level, though in some cases it might be needed to document also the changes performed on individual values.

Mappings at attribute level allow tracing the data flow, while mappings at value level allow tracing the changes occurring within values.
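
For illustration only, such mappings can be kept as simple lookup structures; the entity, attribute and value names below are made up:

```python
# attribute-level mapping: which source attribute feeds which target attribute
attribute_map = {
    ("Customers", "Name"):        ("CUST_MASTER", "NAME"),
    ("Customers", "CountryCode"): ("CUST_MASTER", "COUNTRY"),
}

# value-level mapping: how individual values were changed along the way
value_map = {
    ("Customers", "CountryCode"): {"UK": "GB", "GER": "DE"},
}

def trace(entity, attribute, source_value):
    """Return the target attribute and the (possibly mapped) target value."""
    target = attribute_map[(entity, attribute)]
    mapped = value_map.get((entity, attribute), {}).get(source_value, source_value)
    return target, mapped

print(trace("Customers", "CountryCode", "UK"))  # (('CUST_MASTER', 'COUNTRY'), 'GB')
```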

Data Migrations (DM): Quality Acceptance Criteria III

Data Migration
Data Migrations Series

Repeatability

Repeatability is the degree to which a DM can be repeated while obtaining consistent results between repetitions. Even if a DM is supposed to be a one-time activity for a project, to guarantee a certain level of quality it’s important to consider several iterations in which the data requirements are refined and it is made sure that the data can be imported as needed into the target system(s). Considered as a process, as long as the data and the rules haven’t changed, the results should be the same or within the expected level of deviation.

This requirement is especially important for the data migrated during UAT and Go-Live, during which the input data and rules need to remain frozen (even if small changes in the data can still occur). In fact, that’s the role of UAT – to assure that the data have the expected quality and, when compared to the previous dry-run, attain the expected level of consistency.
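
One simple way to check the consistency between two runs is to compare record counts per entity against an accepted tolerance; the sketch below (in Python, with made-up numbers) assumes the counts were already collected:

```python
def compare_runs(previous, current, tolerance=0.02):
    """Flag entities whose record counts deviate from the previous
    dry-run by more than the accepted tolerance (default 2%)."""
    deviations = {}
    for entity, prev_count in previous.items():
        curr_count = current.get(entity, 0)
        if prev_count and abs(curr_count - prev_count) / prev_count > tolerance:
            deviations[entity] = (prev_count, curr_count)
    return deviations

previous = {"Customers": 10000, "Materials": 25000}
current  = {"Customers": 10050, "Materials": 23800}
print(compare_runs(previous, current))  # {'Materials': (25000, 23800)}
```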

Reusability

Reusability is the degree to which the whole solution, parts of the logic or data can be reused for multiple purposes. Master data and the logic associated with them have high reusability potential as they tend to be referenced by multiple entities. 

Modularity

Modularity is the degree to which a solution is composed of discrete components such that a change to one component has minimal impact on other components. It applies to the solution itself but also to the degree to which the logic for the various entities is partitioned so as to assure minimal impact.

Partitionability

Partitionability is the degree to which data or logic can be partitioned to address the various requirements. Despite the assurance that the data will be migrated only once, in practice this assumption can be easily invalidated. It’s enough to extend the system freeze by a few days and/or to have transaction data that suddenly requires master data not considered before. Even if the deltas can be migrated into the system manually, it’s probably recommended to migrate them using the same logic. Moreover, performing incremental loads can be a project requirement.

Data might need to be partitioned into batches to improve processing performance. Partitioning the logic based on certain parameters (e.g. business unit, categorical values) allows more flexibility in handling other requirements (e.g. reversibility, performance, testability, reusability).
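
A minimal sketch in Python of both ideas, with illustrative field names: splitting records into fixed-size batches, respectively partitioning them by a categorical value such as the business unit:

```python
from itertools import islice

def batches(records, size):
    """Split records into fixed-size batches to improve processing performance."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch

def partition_by(records, key):
    """Partition records by a categorical value (e.g. business unit), so each
    partition can be processed, reloaded or tested separately."""
    partitions = {}
    for rec in records:
        partitions.setdefault(rec[key], []).append(rec)
    return partitions

rows = [{"id": i, "business_unit": "BU" + str(i % 2)} for i in range(10)]
print([len(b) for b in batches(rows, 4)])  # [4, 4, 2]
print({k: len(v) for k, v in partition_by(rows, "business_unit").items()})
```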

Performance

Performance refers to the degree to which a piece of software can process data in an amount of time considered acceptable for the business. It can vary with the architecture and methods used, respectively with data volume, veracity, variance, variability, or quality.

Performance is a critical requirement for a DM, especially when considering the amount of time spent on executing the logic during development, tests and troubleshooting, as well as for other activities. Performance is important during dry-runs but even more so during Go-Live, which equates to a period during which the system(s) are not available to the users. Depending on the case, a few hours of delay can have an important impact on the business. In extremis, the delays can sum up to days.

Predictability

Predictability is the degree to which the results and behavior of a solution, respectively the processes involved, are predictable based on the design, implementation or other factors considered (e.g. best practices, methodology used, experience, procedures and processes). Highly predictable solutions are desirable, though reaching the required level of performance and quality can be challenging.

The results from the dry-runs can offer an indication of whether the data migrated during UAT and Go-Live provide a certain level of assurance that the DM will be a success. Otherwise, an additional dry-run should be planned during UAT, if the schedule allows it.


Data Migrations (DM): Quality Acceptance Criteria II

Data Migration
Data Migrations Series

Auditability

Auditability is the degree to which the solution allows checking the data for their accuracy, or for their quality in general, respectively the degree to which the DM solution and processes can be audited regarding compliance, security and other types of requirements. All these aspects are important in case an external sign-off from an auditor is mandatory.

Automation

Automation is the degree to which the activities within a DM can be automated. Ideally, all the processes or activities should be automated, though other requirements might be impacted negatively. Therefore, one needs to find the right balance between the various requirements.

Cohesion

Cohesion is the degree to which the tasks performed by the solution, respectively during the migration, are related to each other. Given the dependencies existing between the data, their processing and further project-related activities, a DM implies a high degree of cohesion that needs to be addressed by design.

Complexity 

Complexity is the degree to which a solution is difficult to understand given the various processing layers and dependencies existing within the data. The complexity of a DM revolves mainly around the data structures and the transformations needed to translate the data between the various data models.

Compliance 

Compliance is the degree to which a solution complies with the internal or external regulations that apply. One should differentiate between mandatory requirements, respectively recommendations and other requirements.

Consistency 

Consistency is the degree to which data conform to an equivalent set of data; in this case, the entities considered for the DM need to be consistent with each other. A record referenced in any entity of the migration needs to be considered, respectively made available in the target system(s), either by parametrization or migration.

During each iteration the data need to remain consistent, as this facilitates troubleshooting. The data are usually reimported between iterations, or during the same iteration, typically to reflect the changes that occurred in the source systems or for other purposes.
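
A rough sketch of such a check in Python, with illustrative entity and field names: the keys referenced by the transaction data are compared against the master data made available in the target system:

```python
def missing_references(transactions, master, key="customer_id"):
    """Return the keys referenced by transaction data that are not
    present in the migrated (or parametrized) master data."""
    available = {rec[key] for rec in master}
    return sorted({rec[key] for rec in transactions} - available)

customers = [{"customer_id": 1}, {"customer_id": 2}]
orders    = [{"customer_id": 1}, {"customer_id": 3}]
print(missing_references(orders, customers))  # [3]
```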

Coupling 

Data coupling is the degree to which different processing areas within a DM share the same data, typically a reflection of the dependencies existing between the data. Ideally, the areas should be decoupled as much as possible. 

Extensibility

Extensibility is the degree to which the solution or parts of the logic can be extended to accommodate further requirements. Typically, this involves changes that deviate from the standard functionality. Extensibility has a positive impact on flexibility.

Flexibility 

Flexibility is the degree to which a solution can handle new requirements or ad-hoc changes to the logic. No matter how well everything was planned, there’s always something forgotten or new information that is identified. Having the flexibility to change code or data on the fly can make an important difference.

Integrity 

Integrity is the degree to which a solution prevents changes to the data besides the ones considered by design. Users and processes should not be able to modify the data outside the agreed procedures. This means that the data need to be processed in the agreed sequence. All aspects related to data integrity need to be documented accordingly.

Interoperability 

Interoperability is the degree to which a solution’s components can exchange data and use the respective data. The various layers of a DM solution must be able to process the data, and this should be possible by design.

Maintainability

Maintainability is the degree to which a solution can be modified to add minor features, change existing code, correct issues, refactor code, improve performance or address changes in the environment. The data required and the transformation rules are seldom known in advance. The data requirements are finalized during the various iterations, and the changes need to be implemented as the iterations progress. Thus, maintainability is a critical requirement.


Data Migrations (DM): Quality Acceptance Criteria I

Data Migration
Data Migrations Series

Introduction

When designing a Data Migration (DM), respectively any software solution, it’s important to take inventory of the project’s requirements and evaluate, document, communicate and monitor them accordingly. Each of them can have an important impact on the solution, as a solution’s success will be validated and judged against them. Therefore, the identified requirements must be considered as a baseline for conceptualization, design, implementation and sign-off, and should go through the same procedures and rigor as the other project requirements. The existence of a standardized Requirements Management process can facilitate their management through the project’s lifecycle.

The requirements are usually driven by the source and target systems (e.g. data import/export features, data models and their constraints), the environments they are hosted on (e.g. cloud vs. on-premise), respectively the layers in between (e.g. network, firewalls), as well as the project and business aspects that need to be considered (e.g. freeze window for the Go-Live, data availability dates, data quality, external dependencies, etc.). They refer to the solution itself as well as to the data and processes involved, and are reflected in, but not limited to, the following important aspects, which can upon case also be considered as quality acceptance criteria:

Accessibility

Accessibility is the degree to which the data are available for a solution so they can be processed when needed, in the intended form, by the intended resources or means. It’s critical for a DM solution to access or have available the master, transaction, parameter and further data when needed. The team must make sure that the data become easily accessible.

Unavailability of data can impact the DM and can easily lead to delays in the project. This also means that the various project activities (parametrization, cleansing, enrichment, development) need to be synchronized with the migration activities. 

Upon case, accessibility can involve the solution itself, expressed as the degree to which it’s available to the resources supposed to use it. Certain architectural decisions can have an impact on the activities carried out. As the solution is usually deployed on a server, it can happen that only a limited number of people are able to access it concurrently. Moreover, a DM’s complexity makes the involvement of multiple developers challenging.

Accountability

Accountability is the degree to which accountability is enforced for the various resources involved in DM processes and related activities. As multiple resources are involved in parametrization, cleansing, processing, validation and software development, each resource needs to be aware of the extent to which they are accountable. Without accountability being made explicit, there’s the danger that activities are neglected, with all the implications deriving from it – quality deviations, delays, data unavailability, etc.

Adaptability

Adaptability is the degree to which a solution can be adapted to environment or requirement changes. Even if the environments typically don’t change, it doesn’t mean that this will not happen, as the IT infrastructure goes through continuous changes that can directly or indirectly affect a migration. The same can be said about requirements, which however have a higher probability of changing, even late in the process, as new knowledge is acquired and needs to be integrated into the solution.

Atomicity 

Atomicity is the degree to which data entities can be processed at the required level of abstraction in an atomic manner. Even if transformations occur during the various stages, the data belonging to an entity need to be kept and processed together (e.g. Customers and their Addresses). This can involve processing attributes in advance even if the data might be required later. There can be situations in which the data belonging to the same entity need to be processed on different paths, though in the end it’s important to keep the data together, when the processing logic allows it.
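
As a rough illustration in Python (names are made up, and addresses are assumed to reference existing customers), the entity can be kept together by grouping the dependent records with their parent before further processing:

```python
def group_entity(customers, addresses, key="customer_id"):
    """Keep each customer together with its addresses so the entity
    is processed (and, if needed, rejected) as one unit."""
    grouped = {c[key]: {"customer": c, "addresses": []} for c in customers}
    for addr in addresses:  # assumes each address references an existing customer
        grouped[addr[key]]["addresses"].append(addr)
    return list(grouped.values())

customers = [{"customer_id": 1, "name": "Contoso"}]
addresses = [{"customer_id": 1, "city": "Berlin"},
             {"customer_id": 1, "city": "Hamburg"}]
print(group_entity(customers, addresses)[0]["addresses"])
```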


15 June 2020

Strategic Management: Quality Acceptance Criteria for Strategies and Concepts

Strategic Management

Quality acceptance criteria for concept documents in general, and for strategies in particular, are not straightforward for everybody; behind the typical request for completeness hide other criteria like being flexible, robust, predictable, implementable, specific, fact-based, time-bound, clear, comprehensible or measurable.

Flexible: once the strategy is approved, one must be able to change it as seems fit, especially to address changes in risks, opportunities, goals and objectives, respectively the identification of new facts. A strategy implies a roadmap on how to arrive from the starting point to the destination. As the intermediary or final destinations change, the strategy must reflect these changes (and it’s useful to document the changes accordingly). This also implies that the strategy must be periodically reviewed, the new facts analyzed accordingly, and a decision made on whether they must become part of the strategy.

Robust: a strategy must handle variability (aka changes) and remain effective (producing the desired/intended results).

Predictable: the strategy needs to embrace the uncertainty and complexity of the world. Even if one can’t predict the future, the strategy must consider the changes foreseen in the industry and technologies. It’s not necessarily about imagining the future, even if this would be ideal, but about considering the current trends in the industry.

Implementable: starting with the goals and ending with the roadmap, the strategy must be realistic and address the organization’s current, respectively future capabilities. If the organization needs to acquire further capabilities, these need to be considered as well.

Specific: the strategy must address the issues, goals and objectives specific to the organization. As long as these are not reflected in it, the strategy is more likely to fail. It is true that many of the issues and goals considered can also be met in other organizations; however, there are always important aspects that need to be made explicit.

Fact-based: the strategy must be based on facts rooted in internal or market analysis; however, the strategy is not a research paper that treats the various concepts and findings in detail – definitions, summaries of the findings with their implications, and references to further literature are enough, if needed.

Time-bound: in contrast to other concepts, the strategy must specify the timeframe considered for its implementation. Typically, a strategy addresses a time interval of 3 to 5 years, though upon case the interval may be contracted or dilated to consider business specifics. The strategy can further break down the roadmap per year or biannually.

Complete: the strategy must be complete with respect to the important topics it needs to address. It’s not only about filling out a template with information; the reader must get a good understanding of what the strategy is about. Complete doesn’t mean perfect, but rather providing a good enough description of the intent.

Clear: especially when there are competing interests, the strategy must describe what is in scope and what was left out. What was left out is as important as what is considered, including the various presumptions. A test of clearness is whether the why, how, who, when and by what means were adequately considered.

Comprehensible: the targeted audience must be able to read and understand the strategy at the appropriate level of detail or scope.

Measurable: the progress of a strategy must be measurable, and there are two aspects to consider. On one side the goals and objectives considered must be measurable by definition (see SMART criteria), while on the other, one must be able to track the progress and various factors related to it (e.g. implementation costs, impact of the changes made, etc.). Therefore, a strategy must include a set of metrics that will allow quantifying the mentioned aspects.

09 July 2019

IT: Resilience (Definitions)

"The ability to cope with adversity and recover quickly from setbacks." (PMI, "Navigating Complexity: A Practice Guide", 2014)

"System resilience is an ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time." (Denis Čaleta, "Cyber Threats to Critical Infrastructure Protection: Public Private Aspects of Resilience", 2016)

"The ability of an information system to continue to (1) operate under adverse conditions or stress, even if in a degraded or debilitated state, while maintaining essential operational capabilities; and (2) recover to an effective operational posture in a time frame consistent with mission needs." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"The ability of a project to readily resume from unexpected events, threats or actions." (Phil Crosby, "Shaping Mega-Science Projects and Practical Steps for Success", 2019)

"The ability of an infrastructure to resist, respond and overcome adverse events" (Konstantinos Apostolou et al, "Business Continuity of Critical Infrastructures for Safety and Security Incidents", 2020)

"The word resilience refers to the ability to overcome critical moments and adapt after experiencing some unusual and unexpected situation. It also indicates return to normal." (José G Vargas-Hernández, "Urban Socio-Ecosystems Green Resilience", 2021)

"Adaptive capacity of an organisation in a complex and changing environment’ (ISO Guide 73:2009)

"The ability to resist failure or to recover quickly following a failure" (ITIL)

"The ability of an information system to continue to: (i) operate under adverse conditions or stress, even if in a degraded or debilitated state, while maintaining essential operational capabilities; and (ii) recover to an effective operational posture in a time frame consistent with mission needs." (NIST SP 800-39)

"The ability to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions. Resilience includes the ability to withstand and recover from deliberate attacks, accidents, or naturally occurring threats or incidents." (NIST SP 800-37)

"The ability to quickly adapt and recover from any known or unknown changes to the environment through holistic implementation of risk management, contingency, and continuity planning." (NIST SP 800-34 Rev. 1)

26 December 2013

Project Management: Laws (Just the Quotes)

"Nothing will ever be attempted if all possible objections must first be overcome." (Samuel Johnson, 1759)

"Remember, if you fail to prepare you are preparing to fail." (H K Williams, 1919)

"If we fail to prepare we prepare to fail." (James H Hope, 1929)

"In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away [...]" (Antoine de Saint Exupéry, "Wind, Sand and Stars", 1939) 

"Work expands so as to fill the time available for its completion." (C Northcote Parkinson, "Parkinson’s Law", 1957) 

"Adding manpower to a late software project makes it later." (Fred Brooks, "The Mythical Man-Month", 1975)

"By failing to plan, you will free very little, if any, time, and by failing to plan you will almost certainly fail […] Exactly because we lack time to plan, we should take time to plan." (Alan Lakein, "How to get control of your time and your life", 1974) 

"Program advocates like to keep bad news covered up until they have spent so much money they can advance the sunk-cost argument; that it's too late to cancel the program because we've spent too much already." (James P Stevenson, "The Pentagon Paradox: The development of the F-18 Hornet", 1993)

Graham's Law: "If they know nothing of what you are doing, they suspect you are doing nothing." (Robert J Graham et al, "The Complete Idiot's Guide to Project Management", 2007) 

O'Brochta's Law: "Project management is about applying common sense with uncommon discipline." (Michael O'Brochta, "Great Project Managers", 2008) 

"No project should be allowed to proceed without clear specification and acceptance  criteria, that are understood by all participants." (Tony Martyr, "Why Projects Fail", 2018)

Augustine's Law: "A bad idea executed to perfection is still a bad idea." (Norman R Augustine)

Cohn's Law: "The more time you spend in reporting on what you are doing, the less time you have to do anything. Stability is achieved when you spend all your time doing nothing but reporting on the nothing you are doing."

Dobbins’ Law: "When in doubt, use a bigger hammer."

Fitzgerald's Law: "There are two states to any large project: Too early to tell and too late to stop." (Ernest Fitzgerald)

Hoggarth's Law: "Attempts to get answers early in a project fail as there are many more wrong questions than right ones. Activity during the early stages should be dedicated to finding the correct questions. Once the correct questions have been identified correct answers will naturally fall out of subsequent work without grief or excitement and there will be understanding of what the project is meant to achieve."

Kinser's Law: "About the time you finish doing something, you know enough to start." (James C Kinser) 

Wolf's Law of Management: "The tasks to do immediately are the minor ones; otherwise, you’ll forget them. The major ones are often better to defer. They usually need more time for reflection. Besides, if you forget them, they’ll remind you."

14 February 2007

Software Engineering: Reliability (Definitions)

"[...] the characteristic of an information infrastructure to store and retrieve information in an accessible, secure, maintainable, and fast manner." (Martin J Eppler, "Managing Information Quality 2nd Ed.", 2006)

"The measure of robustness over time. The length of time a product or process performs as intended." (Lynne Hambleton, "Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices", 2007)

"Reliability describes a product’s ability to maintain its defined functions under defined conditions for a specified period of time." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"A stochastic measure of the likelihood that a system will be able to deliver a service." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"The degree to which the new system is perceived as being better than the system it replaces, often expressed in the economic or social status terms that will result from its adoption." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed, 2011)

"The ability for a component (server, application, database, etc.) or group of components to consistently perform its functions." (Craig S Mullins, "Database Administration", 2012)

"A set of characteristics relating to the ability of the software product to perform its required functions under stated conditions for a specified period of time or for a specified number of operations." (Tilo Linz et al, "Software Testing Foundations" 4th Ed, 2014)

 "It is a characteristic of an item (component or system), expressed by the probability that the item (component/system) will perform its required function under given conditions for a stated time interval." (Harish Garg,  "Predicting Uncertain Behavior and Performance Analysis of the Pulping System in a Paper Industry using PSO and Fuzzy Methodology", 2014)

"A characteristic of an item (component or system), expressed by the probability that the item (component/system) will perform its required function under given conditions for a stated time interval." (Harish Garg, "A Hybrid GA-GSA Algorithm for Optimizing the Performance of an Industrial System by Utilizing Uncertain Data", 2015)

"A sub-set of statistical engineering methodology that predicts performance of a product over its intended life cycle and understanding of the effects of various failure modes on system performance." (Atila Ertas, "Transdisciplinary Engineering Design Process", 2018)

"The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations" (ISO 9126)

"The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations." (ISO/IEC 25000)

"The capability of a system or component to perform its required functions under stated conditions for a specified period of time." (IEEE Std 610.12-1990) 

12 February 2007

Software Engineering: Usability (Definitions)

"The ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component." (IEEE, "IEEE Standard Glossary of Software Engineering Terminology", 1990)

"The characteristic of an information environment to be user-friendly in all its aspects (easy to learn, use, and remember)." (Martin J Eppler, "Managing Information Quality" 2nd Ed., 2006)

"The ability to use an element or work product in a different circumstance or environment." (Bruce P Douglass, "Real-Time Agility", 2009)

"A pragmatic quality characteristic that is a measure of the degree to which the information presentation is directly and efficiently usable for its purpose." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"A multifaceted term that refers to how easy it is for users to accomplish whatever task they need to do." (Matt Telles, "Beginning Programming", 2014)

"A questionnaire-based usability test technique for measuring software quality from the end user's point of view. [Kirakowski93]" (Standard Glossary, "ISTQB", 2015)

"Computing the degree to which a software application or a website is easy to use with no specific training. Usability is the art and science of designing systems or web sites that are easy to learn, easy to remember how to use, efficient to use, error tolerant and engaging." (European Commission [Usability Glossary])

"Easiness with which an application, product or service can be used" (ITIL)

"Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." (ISO/IEC 9241-11)

"The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions." (ISO 9126, 25000)

"Usability is the degree to which something - software, hardware or anything else - is easy to use and a good fit for the people who use it." (Usability BoK)

10 February 2007

Software Engineering: Trustworthiness (Definitions)

"Having reliable, appropriate, and validated levels of security." (Mark Rhodes-Ousley, "Information Security: The Complete Reference" 2nd Ed., 2013)

"Worthy of being trusted to have certain specified properties." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"The perception and confidence in the quality of the model by its users." (Panos Alexopoulos, "Semantic Modeling for Data", 2020)

"Computer hardware, software and procedures that - 1) are reasonably secure from intrusion and misuse; 2) provide a reasonable level of availability, reliability, and correct operation; 3) are reasonably suited to performing their intended functions; and 4) adhere to generally accepted security procedures." (NIST SP 800-12 Rev. 1)

"Worthy of being trusted to fulfill whatever critical requirements may be needed for a particular component, subsystem, system, network, application, mission, enterprise, or other entity. Note From a privacy perspective, a trustworthy system is a system that meets specific privacy requirements in addition to meeting other critical requirements." (NISTIR 8062)

"The degree to which an information system (including the information technology components that are used to build the system) can be expected to preserve the confidentiality, integrity, and availability of the information being processed, stored, or transmitted by the system across the full range of threats. A trustworthy information system is a system that is believed to be capable of operating within defined levels of risk despite the environmental disruptions, human errors, structural failures, and purposeful attacks that are expected to occur in its environment of operation." (NIST SP 800-53 Rev. 4)

"The degree to which the security behavior of a component is demonstrably compliant with its stated functionality." (NIST SP 800-160)

