"Program testing can be used to show the presence of bugs, but never to show their absence!" (Edsger W Dijkstra, "Notes On Structured Programming", 1970)
"Test input for validity and plausibility. [...] Make sure input cannot violate the limits of the program. [...] Identify bad input; recover if possible. [...] Test programs at their boundary values." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)
"Watch out for off-by-one errors. A common cause of off-by-one errors is an incorrect test, for example using "greater than" when "greater than or equal to" is actually needed. This program is a binary search routine, which looks for a particular element in a table by halving the interval in which the element might lie, until it ultimately either finds it, or deduces that it isn't present." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)
"Write and test a big program in small pieces." (Brian W Kernighan & Phillip J Plauger, "The Elements of Programming Style", 1974)
"Object-oriented programming languages support encapsulation, thereby improving the ability of software to be reused, refined, tested, maintained, and extended. The full benefit of this support can only be realized if encapsulation is maximized during the design process. […] design practices which take a data-driven approach fail to maximize encapsulation because they focus too quickly on the implementation of objects." (Rebecca Wirfs-Brock, "Object-oriented Design: A. responsibility-driven approach", 1989)
"A design remedy that prevents bugs is always preferable to a test method that discovers them." (Boris Beizer, "Software Testing Techniques", 1990)
"A test that reveals a bug has succeeded, not failed." (Boris Beizer, "Software Testing Techniques", 1990)
"More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded - indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest." (Boris Beizer, "Software Testing Techniques", 1990)
"Programmers are responsible for software quality - quality in their own work, quality in the products that incorporate their work, and quality at the interfaces between components. Quality has never been and will never be tested in. The responsibility is both moral and professional." (Boris Beizer, "Software Testing Techniques", 1990)
"A problem with this 'waterfall' approach is that there will then be no user interface to test with real users until this last possible moment, since the 'intermediate work products' do not explicitly separate out the user interface in a prototype with which users can interact. Experience also shows that it is not possible to involve the users in the design process by showing them abstract specifications documents, since they will not understand them nearly as well as concrete prototypes."
"Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better." (Steve C McConnell, "Code Complete: A Practical Handbook of Software Construction", 1993)
"The entire idea behind prototyping is to save on the time and cost to develop something that can be tested with real users. These savings can only be achieved by somehow reducing the prototype compared with the full system: either cutting down on the number of features in the prototype or reducing the level of functionality of the features such that they seem to work but do not actually do anything."
"The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code." (Charles A R Hoare, "How Did Software Get So Reliable Without Proof?", Lecture Notes in Computer Science Vol. 1051, 1996)
"The longer we wait between integrations and acceptance tests, the worse things get. Wait twice as long and we'll have four or more times the hassle. The reason is that one bug written just yesterday is pretty easy to find, while ten or a hundred written weeks ago can become almost impossible." (Ron Jeffries, "Extreme Programming Installed", 2001)
"Unit tests can be tedious to write, but they save you time in the future (by catching bugs after changes). Less obviously, but just as important, is that they can save you time now: tests focus your design and implementation on simplicity, they support refactoring, and they validate features as you develop." (Ron Jeffries, "Extreme Programming Installed, 2001)
"People also underestimate the time they spend debugging. They underestimate how much time they can spend chasing a long bug. With testing, I know straight away when I added a bug. That lets me fix the bug immediately, before it can crawl off and hide. There are few things more frustrating or time wasting than debugging. Wouldn't it be a hell of a lot quicker if we just didn't create the bugs in the first place?" (Martin Fowler, 2002)
"A system that is comprehensively tested and passes all of its tests all of the time is a testable system. That’s an obvious statement, but an important one. Systems that aren’t testable aren’t verifiable. Arguably, a system that cannot be verified should never be deployed." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)
"Features have a specification cost, a design cost, and a development cost. There is a testing cost and a reliability cost. […] Features have a documentation cost. Every feature adds pages to the manual increasing training costs." (Douglas Crockford, "JavaScript: The Good Parts: The Good Parts", 2008)
"If the discipline of requirements specification has taught us anything, it is that well-specified requirements are as formal as code and can act as executable tests of that code!"
"It is a myth that we can get systems 'right the first time'. Instead, we should implement only today’s stories, then refactor and expand the system to implement new stories tomorrow. This is the essence of iterative and incremental agility. Test-driven development, refactoring, and the clean code they produce make this work at the code level."
"It is unit tests that keep our code flexible, maintainable, and reusable. The reason is simple. If you have tests, you do not fear making changes to the code! Without tests every change is a possible bug."
"It turns out that strong typing does not eliminate the need for careful testing. And I have found in my work that the sorts of errors that strong type checking finds are no the errors I worry about." (Douglas Crockford, "JavaScript: The Good Parts", 2008)
"Acceptance testing relies on the ability to execute automated tests in a productionlike environment. However, a vital property of such a test environment is that it is able to successfully support automated testing. Automated acceptance testing is not the same as user acceptance testing. One of the differences is that automated acceptance tests should not run in an environment that includes integration to all external systems. Instead, your acceptance testing should be focused on providing a controllable environment in which the system under test can be run. 'Controllable' in this context means that you are able to create the correct initial state for our tests. Integrating with real external systems removes our ability to do this." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)
"In many applications, integration or functional tests are used by default as the standard way to test almost all aspects of the system. However integration and functional tests are not the best way to detect and identify bugs. Because of the large number of components involved in a typical end-to-end test, it can be very hard to know where something has gone wrong. In addition, with so many moving parts, it is extremely difficult, if not completely unfeasible, to cover all of the possible paths through the application." (John F Smart, "Jenkins: The Definitive Guide", 2011)
"Systems with high risks must be tested more thoroughly than systems that do not generate big losses if they fail. The risk assessment must be done for the individual system parts, or even for single error possibilities. If there is a high risk for failures by a system or subsystem, there must be a greater testing effort than for less critical (sub)systems. International standards for production of safety-critical systems use this approach to require that different test techniques be applied for software of different integrity levels." (Andreas Spillner et al, "Software Testing Foundations: A Study Guide for the Certified Tester Exam" 4th Ed., 2014)
"But perhaps the biggest problem is that the longer you spend working on something - whether it’s a prototype or a real product - the more attached you’ll become, and the less likely you’ll be to take negative test results to heart. After one day, you’re receptive to feedback. After three months, you’re committed."
"Sometimes you can’t fit everything in. Remember that the sprint is great for testing risky solutions that might have a huge payoff. So you’ll have to reverse the way you would normally prioritize. If a small fix is so good and low-risk that you’re already planning to build it next week, then seeing it in a prototype won’t teach you much. Skip those easy wins in favor of big, bold bets."
"Automated testing is a safety net that protects the program from its programmers." (Yegor Bugayenko, "Code Ahead", 2018)
"Quality is a product of a conflict between programmers and testers." (Yegor Bugayenko, "Code Ahead", 2018)
"Quality must be enforced, otherwise it won't happen. We programmers must be required to write tests, otherwise we won't do it." (Yegor Bugayenko, "Code Ahead", 2018)
"The job of a tester is to prove that the software is bug free, while it has to be the other way around: The job of a tester is to prove that the software is broken. The better testers are doing their jobs, the more bugs they manage to find and report." (Yegor Bugayenko, "Code Ahead", 2018)
"Code coverage can provide some insight into untested code, but it is not a substitute for thinking critically about how well your system is tested." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)
"Fixing a bug is much like adding a new feature: the presence of the bug suggests that a case was missing from the initial test suite, and the bug fix should include that missing test case." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)
"In addition to developing the proper culture, invest in your testing infrastructure by developing linters, documentation, or other assistance that makes it more difficult to write bad tests." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)
"When an engineer refactors the internals of a system without modifying its interface, whether for performance, clarity, or any other reason, the system’s tests shouldn’t need to change. The role of tests in this case is to ensure that the refactoring didn’t change the system’s behavior. Tests that need to be changed during a refactoring indicate that either the change is affecting the system’s behavior and isn’t a pure refactoring, or that the tests were not written at an appropriate level of abstraction." (Titus Winters, "Software Engineering at Google: Lessons Learned from Programming Over Time", 2020)