
11 May 2019

#️⃣Software Engineering: Programming (Part XI: The Dark Side)


As a member of the programmers’ extended community, it’s hard to accept some of the views that disparage programmers and their work. In some contexts, maybe the critics reveal some truths. It’s in human nature to generalize the bad experiences people have or to oversimplify some of programmers’ traits into stereotypes, however generalizations and simplifications with pejorative connotations do no service to the criticized group, nor to the critics.

The programmer finds himself at the end of the chain of command, and he’s therefore the easiest to blame for the problems existing in software development (SD). A common reasoning fallacy is equating the results of the programming process with programmers’ capabilities, when the problems reside in the organization itself – the way it handles each step of the processes involved, the way it manages projects, the way it’s organized, the way it addresses cultural challenges, etc.

The meaningful part of SD starts with requirements’ elicitation, the process of researching and discovering the requirements upon which a piece of software is built. The results of the programming process are only as good as the inputs provided – the level of detail, accuracy and completeness with which the requirements were defined. It’s the well-known GIGO (garbage in, garbage out) principle. Even if the programmer questions some of the requirements, for example when they are contradictory or incomplete, each question adds further delays because clarifying the open issues often involves several iterations. Thus, one must choose between being on time and delivering the expected quality. Another problem is that the pay-off and perception of the two differ from the managerial and customers’ perspectives.

A programmer’s work, the piece of software he developed, is seen late in the process, when it may be too late to change something in due time. This happens especially in the waterfall methodology, an aspect addressed by more modern methodologies by involving the customers and getting constructive feedback early in the process, and by developing the software in iterations.

Being at the end of the chain of command, programming is often seen as a lowly endeavor, its importance minimized, maybe because it seems so obvious. Some even consider that anybody can program, and it’s true that, as with any activity, anyone can learn to program, the same way anyone can learn another craft; however, as with any craft, it takes time and skill to master. The simple act of programming doesn’t make one a programmer, the same way the act of singing doesn’t make one a singer. A programmer needs on average several years to achieve an acceptable level of mastery and depth. This can be done only by mastering one or more programming languages and frameworks, getting a good understanding of the SD processes and of what the customers want, and getting hands-on experience on a range of projects that allow programmers to learn and grow.

There are also claims that contain some degree of truth. Overconfidence in one’s skills results in programmers not testing their own work adequately. Programmers tend to use the minimum of effort in achieving a task, with the development environments and frameworks, the methodologies and other tools playing an important part. In extremis, through the hobbies, philosophies, behaviors and quirks they have, not necessarily good or bad, programmers seem to isolate themselves.

In the end the various misconceptions about programmers have influence only to the degree they can pervade a community or an organization’s culture. The bottom line is, as Bjarne Stroustrup formulated it, “an organization that treats its programmers as morons will soon have programmers that are willing and able to act like morons only” [1].



References:
[1] "The C++ Programming Language" 2nd Ed., by Bjarne Stroustrup, 1991

29 April 2019

🗄️Data Management: Data Integration (Part I: From Disintegration to Integration)


No matter how tight the integration between the various systems or processes, there will always be gaps that need to be addressed in one way or another. The problems are in general caused by design errors rooted in the complexity of the logic in the integration layer or in the systems integrated. The errors can range from missing or incorrect validation rules, mappings and parameters to data quality issues.

A unidirectional integration involves distributing data from one system (aka publisher) to one or more systems (aka subscribers), while in bidirectional integrations systems can act as both publishers and subscribers, thus resulting in complex data flows with multiple endpoints. In the simplest integrations the records flow one-to-one between systems, though more complex scenarios can involve logic based on business rules, mappings and other types of transformations. The challenge is to reflect the states as needed by the systems with minimal involvement from the users.

Typically, it falls within the responsibilities of application/process owners or key users to make sure that the integration works smoothly. When the integration makes use of interface or staging tables, these can be used as a starting point for troubleshooting; however, even then the troubleshooting can be cumbersome and involve considerable manual effort. When possible, the data can be exported manually from the various systems and matched in Excel or similar solutions. This often leads to personal or departmental solutions that are hard to maintain, control and support.

A better approach is to automate the process by importing the data from the integrated systems at regular points in time into the same database (much like in a data warehouse), modeling the entities and the needed logic there, and reporting the differences. Even if this approach involves a small investment in the beginning and some optimization of logic or performance over time, it can become a useful tool for troubleshooting the differences. Such solutions can be used successfully in multiple integration scenarios (e.g. web shop or ERP integrations).
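To make the idea more concrete, below is a minimal sketch in Python of such a comparison, assuming the data from both systems were already staged as lists of records; the entity (customer), the key and the field names are made up for illustration, and in practice the staging and comparison would rather happen in a database and SQL-based reports.

```python
# Minimal sketch: report the differences between records staged from two integrated systems.
# The entity ("customer"), the key ("customer_id") and the fields are illustrative only.

def index_by_key(records, key):
    """Index a list of dictionaries by the given key field."""
    return {row[key]: row for row in records}

def compare_entities(publisher_rows, subscriber_rows, key, fields):
    """Return missing records and field-level differences between two systems."""
    pub = index_by_key(publisher_rows, key)
    sub = index_by_key(subscriber_rows, key)
    differences = []
    for record_id, pub_row in pub.items():
        if record_id not in sub:
            differences.append((record_id, "missing in subscriber", None, None))
            continue
        for field in fields:
            if pub_row.get(field) != sub[record_id].get(field):
                differences.append((record_id, field, pub_row.get(field), sub[record_id].get(field)))
    for record_id in sub:
        if record_id not in pub:
            differences.append((record_id, "missing in publisher", None, None))
    return differences

# Example usage with made-up data
erp = [{"customer_id": 1, "name": "Contoso", "country": "DE"}]
shop = [{"customer_id": 1, "name": "Contoso Ltd.", "country": "DE"},
        {"customer_id": 2, "name": "Fabrikam", "country": "US"}]

for diff in compare_entities(erp, shop, "customer_id", ["name", "country"]):
    print(diff)
```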

A set of reports, one for each entity, can help identify the differences between the various systems. Starting from the reported differences, the users can identify, categorize and devise specific countermeasures for the various issues. The best time to have such a solution in place is shortly before or during UAT. This allows making sure that the integration layer really works and helps correct the issues while they still have a small impact on the systems. Some integration issues might even lead to a postponement of the Go-Live. The second-best time is when the first important issues are found, as these issues can serve as support for a business case for implementing this type of solution.

In general, it’s recommended to fix the problems in the integration layer and use the reports only for troubleshooting and for assuring that the integration runs smoothly. There are however situations in which the integration problems can’t be fixed without creating more issues. This is the case when multiple systems are involved and integrated over an integration bus.

One extreme approach, though not advisable, is to build a second integration to correct the issues of the first. This might work in theory, however the risk of multiplying the issues is really high, and the complexity of troubleshooting increases with the degree of dependency between the two integrations. It would be more advisable to rebuild the integration anew, though this approach too has its advantages and disadvantages.

The bottom line is that integration issues should be addressed while they are small, and that an automated solution for comparing the data can help in the process.

12 March 2019

🧭Business Intelligence: Enterprise Reporting (Part XII: Reports’ Lifecycle)


Introduction

A report’s lifecycle is the sequence of stages through which a report goes during the timespan of its ownership. The main stages come down to the report’s definition, development, testing and deployment; however, a report’s life occurs within the context of IT processes like Change, Incident/Problem, Access, Availability, Information Security and Knowledge Management. To these add Data Management processes like Data Governance, Data Quality and Metadata Management. Therefore, the extended report lifecycle could take the following form:


The processes can be easily tailored to an organization’s needs, even if it may take several attempts until the best mix is found. The activities introduced by the supporting processes don’t necessarily change the way reports are developed, as long as the processes integrate smoothly into the report authoring.

Definition Phase

The lifecycle of a report starts with a series of steps that lead to the report’s definition and the requirements associated with it:



The starting point is the identification of a need for data. It can be a business question that needs to be answered, a decision that needs to be made, data needed to keep an operational, tactical or strategic objective under control, and so on. Such business situations can be referred to simply as (business) problems.
Problem definition
Problem definition (statement) is the process by which a business issue or need is clearly and concisely stated. This step might seem trivial and implied, however in practice a considerable volume of rework can be traced back to it.

The dictum “a problem well stated is a problem half-solved” applies in the BI field as well. Unfortunately, there are cases in which the users want something else than what they stated, or they leave important details out. Sometimes the users aren’t sure what they need or want, and it falls within the developer’s responsibilities to help clarify the problem and put it within a context.

There are cases in which the users just request a report without specifying the problem they need to solve. This might work when the user has a good understanding of the data and the problem, however this approach does not always work. Personally, I find it useful to define the underlying problem for each report as well. I see it as a “win-win” situation in which the user shares some knowledge with the developer, and thus the developer better understands the business and, in time, can provide better help. A thorough understanding of the business and knowledge of the users and their needs can help minimize the volume of rework involved in report development.
Requirements definition
Requirements definition is the process by which functional and non-functional expectations, targets and specifications are elicited and documented.

Functional requirements specify what the report must do – how the report is structured or formatted, how data need to be visualized or navigated, to what file formats it needs to be exported, whether it needs to be printed, how the data need to be grouped and in which order, in what currency/language the data need to be displayed, what data sources need to be used, etc. The functional requirements are typically listed in the use case and test script.

Non-functional requirements refer to requirements related to report’s accessibility, availability, performance, compliance, documentation, quality, maintainability, security or testability.

The degree to which a requirement can be fulfilled depends entirely on the reporting platform. One can differentiate between soft and hard constraints. Soft constraints can be overcome by adding more processing power, memory or other types of resources, while hard constraints can’t be easily overcome, or can’t be overcome at all. Of course, not all requirements are equally important. Important unfulfilled requirements can make a report unusable and, in extremis, can lead to choosing one reporting platform over another.

The requirements can be elicited by a developer, an analyst/consultant, or defined by the business itself. Organizations can simplify the process by defining a set of guidelines and standards that need to be considered in reports’ definition. Normally, it is enough to reference the document(s) where the guidelines and standards are found. In contrast to other software artifacts, the requirements for reports can be gathered in a simplified document. Quite often a checklist can help identify these requirements upfront with a minimum of overhead.
Report definition
Report definition is the process by which a report’s content, logic and layout are explicitly defined – what attributes are needed for output and from what source, what static/dynamic parameters are needed, how the data need to be displayed/formatted, and what formulas, aggregations or ordering apply.

A report’s definition can be anything between a simple statement summarizing what the report is about and complex structures (mainly in form of a mapping) reflecting in detail each attribute, constraint, formula, grouping or sorting.

A good definition should allow a developer to create the report as needed by the users, eventually with minimal deviations implied by the user’s understanding. The holy grail in report definition is finding a structure flexible enough to cover all the aspects of a report. Even if some structures allow such flexibility, sometimes it’s almost impossible not to provide additional descriptions in textual form. The less insight the developer has into the business, the more textual descriptions and visuals need to be included to bridge the knowledge gap.
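As an illustration only, a report definition could be captured in a structure like the one sketched below (the attribute, source and parameter names are hypothetical); a mapping document or spreadsheet serves the same purpose:

```python
# Illustrative sketch of a report definition captured as a structured mapping.
# All attribute, source and parameter names are hypothetical.
report_definition = {
    "name": "Open Sales Orders",
    "problem": "Monitor orders not yet delivered, by customer and month",
    "sources": ["SalesOrderHeader", "SalesOrderLine", "Customer"],
    "parameters": {"date_from": "dynamic", "date_to": "dynamic", "country": "static list"},
    "attributes": [
        {"name": "Customer",    "source": "Customer.Name",            "format": None},
        {"name": "Order No",    "source": "SalesOrderHeader.OrderNo", "format": None},
        {"name": "Open Amount", "source": "SalesOrderLine.Amount",    "format": "currency", "aggregation": "sum"},
    ],
    "grouping": ["Customer"],
    "sorting": ["Customer", "Order No"],
    "export_formats": ["xlsx", "pdf"],
}
```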
GAP Analysis
GAP Analysis is the iterative process by which the current state of a software artifact or situation is compared with the potential or desired state. It has become such an integral part of professionals’ thinking that its role as a separate process is quite often ignored. In the context of report authoring it can be used when comparing the requirements against the current infrastructure and the data available, as well as when comparing the developed report against the requirements.

It can happen that the technical and data constraints don’t allow building the report as needed by the users. The differences need to be mitigated, and eventually the requirements need to be changed to accommodate the reality. In extremis, one must consider whether the report still makes sense in the light of the modified requirements.
Solution formulation
Solution formulation is the process by which a formal (technical) solution is defined for the given requirements. It’s a conceptualization (aka concept) of the requirements, and in many cases it’s just a short description of the means by which the report will be built and what data sources will be used. In more complex cases it can include details about the changes needed in the infrastructure to support the report (e.g. creation/extension of tables and other database objects, ETL jobs, components, etc.), about the data that need to be collected, etc.

Of course, the conceptualization must be considered together with the report’s definition. In fact, the report’s definition can be considered part of the conceptualization. A conceptualization can cover multiple reports, just as two or more different solutions can be provided for different sets of reports. The infrastructure can make a concept superfluous, either when there is a single reporting platform or when clear rules are in place.
Prototyping
Prototyping is the iterative process of building a simplified version of the report for demonstration and evaluation purposes, so that users can better define the requirements or so that the concept can be proved. The prototype is a preliminary version that can be refined successively until the user’s requirements have a final form. It can take the form of a mock-up query to verify the report’s technical and logical feasibility, and/or an Excel layout to depict what the report will look like. Prototypes can facilitate the communication between the parties involved and can be considered part of the requirements.
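A mock-up query can be as simple as the sketch below, which uses an in-memory table with made-up sales data to check that the data can be aggregated at the requested level of detail; the table and column names are assumptions for the purpose of the example, not an actual source:

```python
import sqlite3

# Prototype: verify that sales data can be aggregated per customer and month.
# Table and column names are made up for the mock-up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (customer TEXT, order_date TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("Contoso", "2019-01-15", 100.0),
    ("Contoso", "2019-01-20", 250.0),
    ("Fabrikam", "2019-02-03", 80.0),
])

mockup_query = """
    SELECT customer, strftime('%Y-%m', order_date) AS month, SUM(amount) AS total
    FROM sales
    GROUP BY customer, month
    ORDER BY customer, month
"""
for row in con.execute(mockup_query):
    print(row)
```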

A prototype might be needed in 1 out of 5 cases or so, however this number also depends on the number of queries available and on the knowledge of the source systems and business processes. Because a prototype can involve additional work, it’s important to identify those cases in which a prototype makes sense and keep the effort to a minimum, especially when an approval is involved in the process. Therefore, one should consider the most important characteristics that need to be proved (e.g. whether the data can be aggregated, matched, displayed at the requested level of detail, or in the requested format).

With the help of self-service tools, the business has the capability to play with the data and find answers by itself, being thus able to create a prototyped version of the report. Once the report meets the business needs, it can be standardized so it can be used organization-wide. It’s recommended to standardize the reports that are used as part of the organization’s processes, otherwise self-service can become a bottleneck for the organization.
Change Management
Change Management is the process of ensuring that the changes performed to a system, in this case a BI tool or the whole BI infrastructure, are performed with minimal disruption for the business and that the risks are kept under control. Changes can be requested via standard requests or change requests. A standard request (SR) is a pre-approved change that involves low risks, is relatively common and follows a predefined procedure. In contrast to SRs, a change request (CR) requires the authorization of a board, e.g. the Change Advisory Board (CAB); it often involves risks and an investment, and the approach is not that common.

Both are hard-copy or electronic templates that allow capturing information about the changes, documenting them and tracking their status. They typically include the problem definition together with the users’ requirements, the report definition and the formulation of the solution. What differentiates them is thus the approval process, which can sometimes be time-consuming, and the volume of formalism needed to manage the requests (e.g. tracking status, writing status reports, handling risks, etc.).

Unless infrastructural changes are necessary, the risks involved with the creation of reports are relatively small, especially when the reports are developed in-house. Reports developed by vendors involve more risks and imply investments that in one form or another need to be approved. Considering the particularities of the two approaches, personally I think that reports that can be developed with internal resources should be handled via SRs, while reports developed externally should be handled via CRs. Even if this categorization has the potential of creating some confusion, the use of SRs allows reducing the volume of effort necessary to manage the requests. I suppose solutions can be found to request external changes via SRs as well (e.g. by using contingents and a set of well-defined rules).

08 December 2007

🏗️Software Engineering: Testing (Just the Quotes)

"Program testing can be used to show the presence of bugs, but never to show their absence!" (Edsger W Dijkstra, "Notes On Structured Programming", 1970)

"Test input for validity and plausibility. [...] Make sure input cannot violate the limits of the program. [...] Identify bad input; recover if possible. [...] Test programs at their boundary values." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)

"Watch out for off-by-one errors. A common cause of off-by-one errors is an incorrect test, for example using "greater than" when "greater than or equal to" is actually needed. This program is a binary search routine, which looks for a particular element in a table by halving the interval in which the element might lie, until it ultimately either finds it, or deduces that it isn't present." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)

"Write and test a big program in small pieces." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)

"It is a fundamental principle of testing that you must know in advance the answer each test case is supposed to produce. If you don't, you are not testing; you are experimenting." (Brian Kernighan, "Software Tools in Pascal", 1981)

"The essence of a software entity is a construct of interlocking concepts: […] I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation." (Fred Brooks, "No Silver Bullet", 1986)

"Object-oriented programming languages support encapsulation, thereby improving the ability of software to be reused, refined, tested, maintained, and extended. The full benefit of this support can only be realized if encapsulation is maximized during the design process. […] design practices which take a data-driven approach fail to maximize encapsulation because they focus too quickly on the implementation of objects." (Rebecca Wirfs-Brock, "Object-oriented Design: A. responsibility-driven approach", 1989)

"A design remedy that prevents bugs is always preferable to a test method that discovers them." (Boris Beizer, "Software Testing Techniques", 1990)

"A test that reveals a bug has succeeded, not failed." (Boris Beizer, "Software Testing Techniques", 1990)

"More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded - indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest."  (Boris Beizer, "Software Testing Techniques", 1990)

"Programmers are responsible for software quality - quality in their own work, quality in the products that incorporate their work, and quality at the interfaces between components. Quality has never been and will never be tested in. The responsibility is both moral and professional." (Boris Beizer, "Software Testing Techniques", 1990)

"A problem with this 'waterfall' approach is that there will then be no user interface to test with real users until this last possible moment, since the 'intermediate work products' do not explicitly separate out the user interface in a prototype with which users can interact. Experience also shows that it is not possible to involve the users in the design process by showing them abstract specifications documents, since they will not understand them nearly as well as concrete prototypes." (Jakob Nielsen, "Usability Engineering", 1993)

"Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better." (Steve C McConnell, "Code Complete: A Practical Handbook of Software Construction", 1993)

"The entire idea behind prototyping is to save on the time and cost to develop something that can be tested with real users. These savings can only be achieved by somehow reducing the prototype compared with the full system: either cutting down on the number of features in the prototype or reducing the level of functionality of the features such that they seem to work but do not actually do anything." (Jakob Nielsen, "Usability Engineering", 1993)

"The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code." (Charles A R Hoare, "How Did Software Get So Reliable Without Proof?", Lecture Notes in Computer Science Vol. 1051, 1996)

"The longer we wait between integrations and acceptance tests, the worse things get. Wait twice as long and we'll have four or more times the hassle. The reason is that one bug written just yesterday is pretty easy to find, while ten or a hundred written weeks ago can become almost impossible." (Ron Jeffries, "Extreme Programming Installed", 2001)

"Unit tests can be tedious to write, but they save you time in the future (by catching bugs after changes). Less obviously, but just as important, is that they can save you time now: tests focus your design and implementation on simplicity, they support refactoring, and they validate features as you develop." (Ron Jeffries, "Extreme Programming Installed, 2001)

"People also underestimate the time they spend debugging. They underestimate how much time they can spend chasing a long bug. With testing, I know straight away when I added a bug. That lets me fix the bug immediately, before it can crawl off and hide. There are few things more frustrating or time wasting than debugging. Wouldn't it be a hell of a lot quicker if we just didn't create the bugs in the first place?" (Martin Fowler, 2002)

"A system that is comprehensively tested and passes all of its tests all of the time is a testable system. That’s an obvious statement, but an important one. Systems that aren’t testable aren’t verifiable. Arguably, a system that cannot be verified should never be deployed." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

"Features have a specification cost, a design cost, and a development cost. There is a testing cost and a reliability cost. […] Features have a documentation cost. Every feature adds pages to the manual increasing training costs." (Douglas Crockford, "JavaScript: The Good Parts: The Good Parts", 2008)

"If the discipline of requirements specification has taught us anything, it is that well-specified requirements are as formal as code and can act as executable tests of that code!"  (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

"It is a myth that we can get systems 'right the first time'. Instead, we should implement only today’s stories, then refactor and expand the system to implement new stories tomorrow. This is the essence of iterative and incremental agility. Test-driven development, refactoring, and the clean code they produce make this work at the code level." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008) 

"It is unit tests that keep our code flexible, maintainable, and reusable. The reason is simple. If you have tests, you do not fear making changes to the code! Without tests every change is a possible bug."  (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

"It turns out that strong typing does not eliminate the need for careful testing. And I have found in my work that the sorts of errors that strong type checking finds are no the errors I worry about." (Douglas Crockford, "JavaScript: The Good Parts", 2008)

"Acceptance testing relies on the ability to execute automated tests in a productionlike environment. However, a vital property of such a test environment is that it is able to successfully support automated testing. Automated acceptance testing is not the same as user acceptance testing. One of the differences is that automated acceptance tests should not run in an environment that includes integration to all external systems. Instead, your acceptance testing should be focused on providing a controllable environment in which the system under test can be run. 'Controllable' in this context means that you are able to create the correct initial state for our tests. Integrating with real external systems removes our ability to do this." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

"[…] beautiful code is simple code. Each individual part is kept simple with simple responsibilities and simple relationships with the other parts of the system. This is the way we can keep our systems maintainable over time, with clean, simple, testable code, ensuring a high speed of development throughout the lifetime of the system. Beauty is born of and found in simplicity." (Jørn Ølmheim [in Kevlin Henney’s "97 Things Every Programmer Should Know", 2010])

"Many processes in software development are repetitive and easily automated. The DRY principle applies in these contexts, as well as in the source code of the application. Manual testing is slow, error-prone, and difficult to repeat, so automated test suites should be used where possible. Integrating software can be time consuming and error-prone if done manually, so a build process should be run as frequently as possible, ideally with every check-in. Wherever painful manual processes exist that can be automated, they should be automated and standardized. The goal is to ensure that there is only one way of accomplishing the task, and it is as painless as possible." (Steve Smith, [in Kevlin Henney’s "97 Things Every Programmer Should Know", 2010])

"In many applications, integration or functional tests are used by default as the standard way to test almost all aspects of the system. However integration and functional tests are not the best way to detect and identify bugs. Because of the large number of components involved in a typical end-to-end test, it can be very hard to know where something has gone wrong. In addition, with so many moving parts, it is extremely difficult, if not completely unfeasible, to cover all of the possible paths through the application." (John F Smart, "Jenkins: The Definitive Guide", 2011)

"Systems with high risks must be tested more thoroughly than systems that do not generate big losses if they fail. The risk assessment must be done for the individual system parts, or even for single error possibilities. If there is a high risk for failures by a system or subsystem, there must be a greater testing effort than for less critical (sub)systems. International standards for production of safety-critical systems use this approach to require that different test techniques be applied for software of different integrity levels." (Andreas Spillner et al, "Software Testing Foundations: A Study Guide for the Certified Tester Exam" 4th Ed., 2014)

"But perhaps the biggest problem is that the longer you spend working on something - whether it’s a prototype or a real product - the more attached you’ll become, and the less likely you’ll be to take negative test results to heart. After one day, you’re receptive to feedback. After three months, you’re committed." (Jake Knapp et al, "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days", 2016)

"Sometimes you can’t fit everything in. Remember that the sprint is great for testing risky solutions that might have a huge payoff. So you’ll have to reverse the way you would normally prioritize. If a small fix is so good and low-risk that you’re already planning to build it next week, then seeing it in a prototype won’t teach you much. Skip those easy wins in favor of big, bold bets." (Jake Knapp et al, "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days", 2016)

"Automated testing is a safety net that protects the program from its programmers." (Yegor Bugayenko, "Code Ahead", 2018)

"Quality is a product of a conflict between programmers and testers." (Yegor Bugayenko, "Code Ahead", 2018)

"Quality must be enforced, otherwise it won't happen. We programmers must be required to write tests, otherwise we won't do it." (Yegor Bugayenko, "Code Ahead", 2018)

"The job of a tester is to prove that the software is bug free, while it has to be the other way around: The job of a tester is to prove that the software is broken. The better testers are doing their jobs, the more bugs they manage to find and report." (Yegor Bugayenko, "Code Ahead", 2018)

"Code coverage can provide some insight into untested code, but it is not a substitute for thinking critically about how well your system is tested." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)

"Fixing a bug is much like adding a new feature: the presence of the bug suggests that a case was missing from the initial test suite, and the bug fix should include that missing test case." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)

"In addition to developing the proper culture, invest in your testing infrastructure by developing linters, documentation, or other assistance that makes it more difficult to write bad tests." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)

"When an engineer refactors the internals of a system without modifying its interface, whether for performance, clarity, or any other reason, the system’s tests shouldn’t need to change. The role of tests in this case is to ensure that the refactoring didn’t change the system’s behavior. Tests that need to be changed during a refactoring indicate that either the change is affecting the system’s behavior and isn’t a pure refactoring, or that the tests were not written at an appropriate level of abstraction." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)

02 December 2007

🏗️Software Engineering: Quality Assurance (Just the Quotes)

"Program testing can be used to show the presence of bugs, but never to show their absence!" (Edsger W Dijkstra, "Notes On Structured Programming", 1970)

"It is a fundamental principle of testing that you must know in advance the answer each test case is supposed to produce. If you don't, you are not testing; you are experimenting." (Brian Kernighan, "Software Tools in Pascal", 1981)

"The essence of a software entity is a construct of interlocking concepts: […] I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation." (Fred Brooks, "No Silver Bullet", 1986)

"A design remedy that prevents bugs is always preferable to a test method that discovers them." (Boris Beizer, "Software Testing Techniques", 1990)

"A test that reveals a bug has succeeded, not failed." (Boris Beizer, "Software Testing Techniques", 1990)

"More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded - indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest."  (Boris Beizer, "Software Testing Techniques", 1990)

"Programmers are responsible for software quality - quality in their own work, quality in the products that incorporate their work, and quality at the interfaces between components. Quality has never been and will never be tested in. The responsibility is both moral and professional." (Boris Beizer, "Software Testing Techniques", 1990)

"Testing proves a programmer’s failure. Debugging is the programmer’s vindication." (Boris Beizer, "Software Testing Techniques", 1990)

"First law: The pesticide paradox. Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective."  (Boris Beizer, "Software Testing Techniques", 1990)

"Second law: The complexity barrier. Software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity." (Boris Beizer, "Software Testing Techniques", 1990)

"Software never was perfect and won't get perfect. But is that a license to create garbage? The missing ingredient is our reluctance to quantify quality." (Boris Beizer, "Software Testing Techniques", 1990)

"Third law: Code migrates to data. Because of this law there is increasing awareness that bugs in code are only half the battle and that data problems should be given equal attention."  (Boris Beizer, "Software Testing Techniques", 1990)

"The most common kind of coding bug, and often considered the least harmful, are documentation bugs (i.e., erroneous comments). Although many documentation bugs are simple spelling errors or the result of poor writing, many are actual errors - that is, misleading or erroneous comments. We can no longer afford to discount such bugs, because their consequences are as great as 'true' coding errors. Today programming labor is dominated by maintenance. This will increase as software becomes even longer-lived. Documentation bugs lead to incorrect maintenance actions and therefore cause the insertion of other bugs." (Boris Beizer, "Software Testing Techniques", 1990)

"Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better." (Steve C McConnell, "Code Complete: A Practical Handbook of Software Construction", 1993)

"The source code is often the only accurate description of the software. On many projects, the only documentation available to programmers is the code itself. Requirements specifications and design documents can go out of date, but the source code is always up to date. Consequently, it's imperative that the code be of the highest possible quality." (Steve C McConnell," Code Complete: A Practical Handbook of Software Construction", 1993) 

"The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code." (Charles A R Hoare, "How Did Software Get So Reliable Without Proof?", Lecture Notes in Computer Science Vol. 1051, 1996)

"A system that is comprehensively tested and passes all of its tests all of the time is a testable system. That’s an obvious statement, but an important one. Systems that aren’t testable aren’t verifiable. Arguably, a system that cannot be verified should never be deployed." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

"It is a myth that we can get systems 'right the first time'. Instead, we should implement only today’s stories, then refactor and expand the system to implement new stories tomorrow. This is the essence of iterative and incremental agility. Test-driven development, refactoring, and the clean code they produce make this work at the code level." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

"It is unit tests that keep our code flexible, maintainable, and reusable. The reason is simple. If you have tests, you do not fear making changes to the code! Without tests every change is a possible bug."  (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

30 July 2007

🌁Software Engineering: Black-Box Testing (Definitions)

"A specification-based test that looks at a system or unit exclusively from the outside, that is, over its public interface." (Johannes Link & Peter Fröhlich, "Unit Testing in Java", 2003)

"This test compares the externally observable behavior at the external software interfaces (without knowledge of their structure) with the desired behavior. Black-Box tests are frequently equated with »functional tests«, although they can of course also include non-functional tests." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"Repeatable procedure to derive and/or select test cases based on an analysis of the specification, either functional or nonfunctional, of a component or system without reference to its internal structure." (Tilo Linz et al, "Software Testing Foundations" 4th Ed., 2014)

"A software testing methodology that looks at available inputs for an application and the expected outputs from each input." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

[Data coverage (black-box) testing:] "Testing a program or subprogram based on the possible input values, treating the code as a black box" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

[black-box test design technique:] "Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure." (Software Quality Assurance)

"Testing, either functional or non-functional, without reference to the internal structure of the component or system." (Software Quality Assurance)

16 July 2007

🌁Software Engineering: White-Box Test/Testing (Definitions)

"An implementation-based test, in contrast to a specification-based test" (Johannes Link & Peter Fröhlich, "Unit Testing in Java", 2003)

"This test is derived knowing the inner structure of the software and based on the program code, design, interface descriptions, and so on. White-box tests are also called 'structure based tests'." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"Any technique used to derive and/or select test cases based on an analysis of the internal structure of the test object." (Tilo Linz et al, "Software Testing Foundations" 4th Ed., 2014)

"This kind of testing requires you to look at the code and see how it works, so you can test individual blocks and choices within the code." (Matt Telles, "Beginning Programming", 2014)

"White box test design technique in which the test cases are designed using the internal structure of the test object. Completeness of such a test is judged using coverage of structural elements (for example, branches, paths, data). General term for control- or data-flow-based test." (Tilo Linz et al, "Software Testing Foundations", 4th Ed., 2014)

"A software testing methodology that examines the code of an application. This contrasts with black box testing, which focuses only on inputs and outputs of an application." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

"A test designed by someone who knows how the code works internally. That person can guess where problems may lie and create tests specifically to look for those problems." (Rod Stephens, "Beginning Software Engineering", 2015)

"Procedure to derive and select test cases based on an analysis of the internal structure of a component or system." (Standard Glossary, "ISTQB", 2015)

"Testing based on an analysis of the internal structure of the component or system. " (Standard Glossary, "ISTQB", 2015)

🌁Software Engineering: Regression Testing (Definitions)

"A test that exercises the entire application to verify that a new piece of code didn’t break anything." (Rod Stephens, "Beginning Software Engineering", 2015)

[regression test suite:] "A collection of tests that are run against a system on a regular basis to validate that it works according to the tests." (Pramod J Sadalage & Scott W Ambler, "Refactoring Databases: Evolutionary Database Design", 2006)

"Selective retesting of a modified system or component to verify that faults have not been introduced or exposed as a result of the changes, and that the modified system or component still meets its requirements." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"Testing to verify that previously successfully tested features are still correct. It is necessary after modifications to eliminate undesired side effects." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"Testing a program to see if recent changes to the code have broken any existing features." (Rod Stephens, "Start Here!™ Fundamentals of Microsoft® .NET Programming", 2011)

"Testing a previously tested program or a partial functionality following modification to show that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed." (Tilo Linz et al, "Software Testing Foundations" 4th Ed., 2014)

"A software testing method that checks for additional errors in software that may have been introduced in the process of upgrading or patching to fix other problems." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

🌁Software Engineering: Stress Testing (Definitions)

"A test for the computer system to simulate real environment with real volume of data before moving it to production." (Timothy J  Kloppenborg et al, "Project Leadership", 2003)

"Testing that evaluates a system or component at or beyond its specified performance limits." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"A form of simulation modeling that focuses specifically on identifying the response of a model under specific, often highly negative scenarios. Common examples include testing the profitability of a bank given catastrophic levels of mortgage defaults or modeling extreme macroeconomic conditions." (Evan Stubbs, "Delivering Business Analytics: Practical Guidelines for Best Practice", 2013)

"Test of system behavior with overload. For example, running it with too high data volumes, too many parallel users, or wrong usage." (Tilo Linz et al, "Software Testing Foundations, 4th Ed", 2014)

"Stress testing assesses the potential outcome of specific changes that are fundamental, material, and adverse." (Christopher Donohue et al, "Foundations of Financial Risk: An Overview of Financial Risk and Risk-based Financial Regulation" 2nd Ed., 2015)

"A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers." (IEEE 610)

15 July 2007

🌁Software Engineering: Six Sigma (Definitions)

"A statistical term meaning six standard deviations from the norm. Used as the name for a quality improvement program that aims at reducing errors to one in a million." (Judith Hurwitz et al, "Service Oriented Architecture For Dummies" 2nd Ed., 2009)

"1.Generally, a rigorous and disciplined statistical analysis methodology to measure and improve a company’s operational performance, practices and systems. 2.In many organizations, simply a measure of quality near perfection. 3.In data quality, a level of quality in which six standard deviations of a population fall within the upper and lower control limits of quality, allowing no more than 3.4 defects per million parts or transactions." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A methodology to manage process variations that cause defects, defined as unacceptable deviation from the mean or target, and to systematically work toward managing variation to prevent those defects." (Linda Volonino & Efraim Turban, "Information Technology for Management" 8th Ed., 2011)

"A systematic quality improvement process used on both the production and transactional sides of the business to design, manufacture, and market goods and services that customers may desire to purchase." (Joan C Dessinger, "Fundamentals of Performance Improvement" 3rd Ed., 2012)

"A highly structured approach for eliminating defects in any process, whether from manufacturing or transactional processes. It can be applied to a product or a service–oriented process in any organization. Further, Six Sigma is 'a statistical term that measures how far a given process deviates from perfection'. The goal of Six Sigma is to systematically measure and eliminate defects in a process, aiming for a level of less than 3.4 defects per million instances or 'opportunities'." (Robert F Smallwood, "Managing Electronic Records: Methods, Best Practices, and Technologies", 2013)

"A business management strategy originally developed by Motorola in the 1980s. It is essentially a business problem-solving methodology that supports process improvements through an understanding of customer needs, identification of causes of quality variations, and disciplined use of data and statistical analysis." (Sally-Anne Pitt, "Internal Audit Quality", 2014)

"An approach from the production environment for managing quality that targets a mere 3.4 errors per million instances as its performance goal." (Boris Otto & Hubert Österle, "Corporate Data Quality", 2015)

"A disciplined approach to enterprise-wide quality improvement and variation reduction. Technically, it is the denominator of the capability (Cp) index." (Clyde M Creveling, "Six Sigma for Technical Processes", 2006)

"A business management strategy focusing on quality control testing and optimizing processes through reducing process variance." (Evan Stubbs, "Big Data, Big Innovation", 2014)

🌁Software Engineering: Penetration Testing (Definitions)

"A method for assessing information systems in an attempt to bypass controls and gain access." (Weiss, "Auditing IT Infrastructures for Compliance" 2nd Ed., 2015)

"An attempt to circumvent various layers of a system or application’s security controls for the purpose of seeing how far into the system the attacker can get." (Mike Harwood, "Internet Security: How to Defend Against Attackers on the Web" 2nd Ed., 2015)

"A method of evaluating the security of a computer system or network by simulating an attack that a malicious hacker would carry out. This is done so that vulnerabilities and weaknesses can be uncovered." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"Security testing in which evaluators mimic real-world attacks in an attempt to identify ways to circumvent the security features of an application, a system, or a network." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"The portion of security testing in which evaluators attempt to circumvent the security features of a system. The evaluators may be assumed to use all system design and implementation documentation and may include listings of system source code, manuals, and circuit diagrams. The evaluators work under the same constraints applied to ordinary users." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"The specialized testing of a system to determine if it is possible to defeat its security controls." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

🌁Software Engineering: Peer Review (Definitions)

"The review of work products performed by peers during development of the work products to identify defects for removal. The term peer review is used in the CMMI Product Suite instead of the term work product inspection. Essentially, these terms mean the same thing." (Sandy Shrum et al, "CMMI: Guidelines for Process Integration and Product Improvement", 2003)

"A formal review of a complete work product performed by a group to identify defects for removal, and to collect metrics. See also inspection." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"A system using reviewers who are professional equals; a process used for checking the work performed by one’s equals (peers) to ensure that it meets specific criteria." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough." (IQBBA, "Standard glossary of terms used in Software Engineering", 2011)

"In relation to internal audit, a form of external assessment that involves a minimum of three organizations assessing each other's internal audit functions using a round-robin approach (A reviews B who reviews C who reviews A)." (Sally-Anne Pitt, "Internal Audit Quality", 2014)


06 July 2007

🌁Software Engineering: Inspection (Definitions)

"Visual examination of work products to detect errors, violations of development standards, and other problems. See also peer review and static analysis." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"The process of examining a component, subassembly, subsystem, or product for off-target performance, variability, and defects during either product development or manufacturing. The focus is typically on whether the item under inspection is within the allowable tolerances. As with all processes, inspection itself is subject to variability, and out-of-spec parts or functions might pass inspection inadvertently." (Clyde M Creveling, "Six Sigma for Technical Processes: An Overview for R Executives, Technical Leaders, and Engineering Managers", 2006)

"Examining or measuring to verify whether an activity, component, product, result, or service conforms to specified requirements. " (For Dummies, "PMP Certification All-in-One For Dummies" 2nd Ed., 2013)

"A verification method in which one member of a team reads the program or design aloud line by line and the others point out errors" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"Examination of a work product to determine whether it conforms to documented standards." (Project Management Institute, "A Guide to the Project Management Body of Knowledge (PMBOK® Guide )", 2017)

"A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure." (IEEE 610, 1028)

🌁Software Engineering: Quality Control (Definitions)

"The operational techniques and activities that are used to fulfill requirements for quality." (Sandy Shrum et al, "CMMI: Guidelines for Process Integration and Product Improvement" 2nd Ed., 2006)

"Monitoring project performance for quality and identifying sources of unsatisfactory quality measures." (Bonnie Biafore, "Successful Project Management", 2011)

"Procedures and methods for measuring process quality, identifying unacceptable performance, variance and taking corrective action." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A set of activities that measure, evaluate, and report on the quality of software project artifacts throughout the project life cycle." (Project Management Institute, "Software Extension to the PMBOK® Guide 5th Ed", 2013)

"Review of all elements of development and production, often reliant on inspection." (Sally-Anne Pitt, "Internal Audit Quality", 2014)

"The practice of testing or measuring and recording results at checkpoints to assess performance and ensure that the project performance meets the standard within appropriate parameters." (Bonnie Biafore & Teresa Stover, "Your Project Management Coach", 2012)

"The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements." (ISO 8402)

05 July 2007

🌁Software Engineering: Testing (Definitions)

"The process of operating a system or component under controlled conditions to collect measurements needed to determine if the system or component meets its allocated requirements. See also dynamic analysis." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"Activity to verify if an object conforms with its requirements and to detect deviations." (Lars Dittmann et al, "Automotive SPICE in Practice", 2008)

"The application of test cases against a build to ensure that a system performs correctly in those cases." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"Generally, a validation process that compares in an organized fashion the functionality or content of a thing or process against pre-established requirements for that thing or process." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A set of one or more test cases test automation 1. The use of software tools to design or program test cases with the goal to be able to execute them repeatedly using the computer. 2. To support any test activities by using software tools." (Tilo Linz et al, "Software Testing Foundations, 4th Ed", 2014)

"The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects." (Tilo Linz et al, "Software Testing Foundations, 4th Ed", 2014)

"Verifying that a program does what it is supposed to do - and doesn’t do what it is not supposed to do." (Matt Telles, "Beginning Programming", 2014)

"Software testing provides the mechanism for verifying that the requirements identified during the initial phases of the project were properly implemented and that the system performs as expected. The test scenarios developed through these competitions ensure that the requirements are met end-to-end." (Kamalendu Pal & Bill Karakostas, "Software Testing Under Agile, Scrum, and DevOps", 2021)

"A set or one of more test cases" (IEEE 829)

"Activity that verifies that a CI, service or process meets its specifications or agreed requirements" (ITIL)

16 June 2007

🌁Software Engineering: Design Review (Definitions)

"A formal, documented, comprehensive, and systematic examination of a design to evaluate the design requirements and the capability of the design to meet these requirements, and to identify problems and propose solutions." (Sandy Shrum et al, "CMMI: Guidelines for Process Integration and Product Improvement", 2003)

"A process or meeting during which a system, hardware, or software design is presented to stakeholders for comment or approval." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"the quality-assurance process in which all aspects of a system are reviewed publicly prior to the striking of code." (William H Inmon, "Building the Data Warehouse", 2005)

[preliminary design review (PDR):] "in a traditional waterfall process, a review of the preliminary architectural concepts, meant to ensure the validity of subsequent work. In the Harmony/ESW process, the PDR is optional but can be performed at the end of a microcycle in which the primary architectural concerns have been addressed." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"A formal, documented, comprehensive, and systematic examination of a design to evaluate the design requirements and the capability of the design to meet these requirements, and to identify problems and propose solutions." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"A process where all aspects of a system design are reviewed publicly before code construction starts." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A series of meetings wherein all aspects of the database and application code are reviewed for efficiency, effectiveness, and accuracy." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures" 2nd Ed., 2012)

"A series of meetings wherein all aspects of the database and application code are reviewed for efficiency, effectiveness, and accuracy." (Craig S Mullins, "Database Administration", 2012)

" the scheduled-in checkpoints for assessing the design progress towards meeting product requirements and budget." (Atila Ertas, "Transdisciplinary Engineering Design Process", 2018)

20 May 2007

🌁Software Engineering: DevOps (Definitions)

"An application delivery philosophy that stresses communication, collaboration, and integration between software developers and their information technology (IT) counterparts in operations. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services." (Pierre Pureur & Murat Erder, "Continuous Architecture", 2015)

DevOps is an approach based on lean and agile principles in which business owners and the development, operations, and quality assurance departments collaborate to deliver software in a continuous manner that enables the business to more quickly seize market opportunities and reduce the time to include customer feedback. Indeed, enterprise (Sanjeev Sharma & Bernie Coyne, "DevOps For Dummies" 2nd Ed, 2015)

"Is a method for software development and management that integrates the development and deployment cycles to achieve a more agile, continuous evolution of software-based products and services" (Diego R López & Pedro A. Aranda, "Network Functions Virtualization: Going beyond the Carrier Cloud", 2015)

"DevOps is a mindset, a culture, and a set of technical practices. It provides communication, integration, automation, and close cooperation among all the people needed to plan, develop, test, deploy, release, and maintain a Solution." (Dean Leffingwell, "SAFe 4.5 Reference Guide: Scaled Agile Framework for Lean Enterprises" 2nd Ed., 2018)

"Short for development operations, an information technology environment in which development and operations are tightly tied together, yielding small incremental releases to gain user feedback." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"The practice of incorporating developers and members of operations and quality assurance (QA) staff into software development projects to align their incentives and enable frequent, efficient, and reliable releases of software products." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"The tighter integration between the developers of applications and the IT department that tests and deploys them. DevOps is said to be the intersection of software engineering, quality assurance, and operations." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"A software engineering practice that aims at unifying software development (Dev) and software operation (Ops)." (Jun Bi et al, "Automatic Address Scheduling and Management for Broadband IP Networks", Emerging Automation Techniques for the Future Internet, 2019)

"Develop operations, or DevOps, is an agile methodology that merges the functions of software development and operations in the enterprise software development domain. This approach has been adopted in the networking world to facilitate a programmable approach to network operations. Often when applied to networking the term is changed to NetOps." (Patrick Moore, "Model-Centric Fulfillment Operations and Maintenance Automation", Emerging Automation Techniques for the Future Internet, 2019)

"Practices and technologies that promote tighter coupling of software development (Dev) and operations (Ops) - typically marked by more automation, continuous monitoring, shorter development cycles and higher deployment frequencies. A key driver for security policy automation. DevSecOps is a related term that refers to practices and technologies that aim to embed security in DevOps practices." (Myo Zarny et al, "Network Security Policy Automation: Enterprise Use Cases and Methodologies", 2019)

"Development and operations is an abbreviation for 'development' and 'operations'; is a software engineering methodology for managing software development (Dev) and technology operations (Ops). The main aim of DevOps is to enable automation and tracing for all phases of software implementation, from integration, testing, releasing to deployment and infrastructure management." (Antoine Trad & Damir Kalpić, "Using Applied Mathematical Models for Business Transformation", 2020)

"Development and operations (DevOps) has been adopted by prominent software and service companies (e.g., IBM) to support enhanced collaboration across the company and its value chain partners. In this way, DevOps facilitates uninterrupted delivery and coexistence between development and operation facilities, enhances the quality and performance of software applications, improving end-user experience, and help to simultaneous deployment of software across different platforms." (Kamalendu Pal & Bill Karakostas, "Software Testing Under Agile, Scrum, and DevOps", 2021)

"DevOps is a sprint-based approach that can catch coding flaws during the development of code due to security reviews, rework on previous sprint cycles, and testing." (David A Bird, "Hacker and Non-Attributed State Actors", Real-Time and Retrospective Analyses of Cyber Security, 2021)

"It is a set of practices emerging to bridge the gaps between operation and developer teams to achieve a better collaboration." (Mirna Muñoz, "Boosting the Competitiveness of Organizations With the Use of Software Engineering", 2021)

"It is a way to work were the software is rapidly developed and immediately deployed for operating in a computational productive environment. It is continuous delivery product development lifecycle. It must automate the development process. DevOps is both a culture and a set of technologies and tools used for automation." Laura C Rodriguez-Martinez et al, "Service-Oriented Computing Applications (SOCA) Development Methodologies: A Review of Agility-Rigor Balance", 2021)

"People from software development and operations work together to enhance the speed of delivery of new software features. It is a concept for bridging the gap between software development and software operations and integrating the logic of common responsibility for the complete software delivery lifecycle into one cross-functional team." (Anna Wiedemann et al, "Transforming Disciplined IT Functions: Guidelines for DevOps Integration", 2021)

"DevOps is a set of tools and processes that help automate IT operations." (Aniruddha Deswandikar,"Engineering Data Mesh in Azure Cloud", 2024)

"DevOps is a catch‑all term for the blending of roles between developers and operations engineers. As the barriers between roles such as database administrator, systems administrator, and software engineer have eroded, the term DevOps has emerged as a way of describing the intersection of responsibilities from all these camps, and their increasing interrelation in the lifecycle of a product. A crucial enabling aspect of this movement is the increased use of automation in building, deploying, and monitoring large applications." (NGINX) [source]

"DevOps is a collection of best practices and working methods for the software development process whose cumulative goal is to shorten the development life cycle and support practice such as continuous integration, continuous delivery and continuous deployment." (Sum Logic) [source]

"DevOps is a set of practices that works to automate and integrate the processes between software development and IT teams, so they can build, test, and release software faster and more reliably." Atlassian [source

"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes." (Amazon) [source]

"DevOps refers to a broad range of practices related to the development and operation of software code in production in cloud data centers. DevOps is centered in Agile project management techniques and microservice support. DevOps approaches the entire software development lifecycle with automation based around version control standards." (VMWare) [source]

"The cultural movement that stresses communication, collaboration and integration between software developers and IT operations." (Global Knowledge)

15 April 2007

🌁Software Engineering: Test Plan (Definitions)

"Test case scenarios written to test the different functionalities of the system and make sure it meets requirements." (Timothy J  Kloppenborg et al, "Project Leadership", 2003)

"A document describing the scope, approach, characteristics and features to be tested, necessary tasks, resources, task schedule, responsibilities, and risks." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

[engagement test plan:] "A plan designed to gather sufficient appropriate evidence to support an evaluation of how well the key controls function. This may be developed as part of an engagement work program or may replace the engagement work program." (Sally-Anne Pitt, "Internal Audit Quality", 2014)

[level test plan:] "A plan for a specified level of testing. It identifies the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the associated risk(s). In the title of the plan, the word level is replaced by the organization’s name for the particular level being documented by the plan (e.g., Component Test Plan, Component Integration Test Plan, System Test Plan, and Acceptance Test Plan)." (Tilo Linz et al, "Software Testing Foundations" 4th Ed, 2014)

"It identifies, among other test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, and the techniques for measuring results with a rationale for their choice. Additionally, risks requiring contingency planning are described. Thus, a test plan is a record of the test planning process. The document can be divided into a master test plan or a level test plan." (Tilo Linz et al, "Software Testing Foundations, 4th Ed", 2014)

"A document that specifies how a program is to be tested" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process." (IEEE 829)

"A document that outlines the specific steps that will be performed for a particular test, including the required logistical items and expected outcome or response for each step." (NIST SP 800-84)

[level test plan:] "A test plan that typically addresses one test level." (ISTQB)

30 March 2007

🌁Software Engineering: Functional Testing (Definitions)

"Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"1. Checking functional requirements. 2. Dynamic test for which the test cases are developed based on an analysis of the functional specification of the test object. The completeness of this test (its coverage) is assessed using the functional specification." (Tilo Linz et al, "Software Testing Foundations, 4th Ed", 2014)

"Security testing in which advertised security mechanisms of an information system are tested under operational conditions to determine if a given function works according to its requirements." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"Testing based on an analysis of the specification of the functionality of a component or system." (Software Quality Assurance)

24 February 2007

🌁Software Engineering: Quality Assurance (Definitions)

"A planned and systematic means for assuring management that the defined standards, practices, procedures, and methods of the process are applied." (Sandy Shrum et al, "CMMI®: Guidelines for Process Integration and Product Improvement", 2003)

"(1) Activities to ensure that an item or work product conforms to established requirements and standards. Also called product assurance. (2) Activities to ensure that defined standards, procedures, and processes are applied." (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

"A business function that is responsible for the (often statistically based) evaluation of how well a product conforms to requirements (according to the PRD) and specifications (according to the PSD)." (Steven Haines, "The Product Manager's Desk Reference", 2008)

"A process for evaluating project performance in relation to the specified standard of quality." (Bonnie Biafore, "Successful Project Management: Applying Best Practices and Real-World Techniques with Microsoft® Project", 2011)

"All activities within quality management focused on providing confidence that quality requirements are fulfilled." (Tilo Linz et al, "Software Testing Foundations" 4th Ed., 2014)

"Part of quality management focused on providing confidence that quality requirements will be fulfilled." (ISO 9000)
