"There is nothing in any object, consider'd in itself, which can afford us a reason for drawing a conclusion beyond it; […] even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience." (David Hume, "A Treatise of Human Nature", 1739) [considered as the first attempt to underline the limits of inductive inference]
"Consider any of the heuristics that people have come up with for supervised learning: avoid overfitting, prefer simpler to more complex models, boost your algorithm, bag it, etc. The no free lunch theorems say that all such heuristics fail as often (appropriately weighted) as they succeed. This is true despite formal arguments some have offered trying to prove the validity of some of these heuristics." (David H Wolpert, "The lack of a priori distinctions between learning algorithms", Neural Computation Vol. 8(7), 1996)
"[...] an algorithm’s average performance is determined by how 'aligned' it is with the underlying probability distribution over optimization problems on which it is run." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)
"[...] despite the NFL theorems, algorithms can have a priori distinctions that hold even if nothing is specified concerning the optimization problems. In particular, we show that there can be 'head-to-head' minimax distinctions between a pair of algorithms, i.e., that when considering one function at a time ,a pair of algorithms may be distinguishable, even if they are not when one looks over all functions." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)
"The NFL theorems do not directly address minimax properties of search. For example, say we are considering two deterministic algorithms a1 and a2. It may very well be that there exist cost functions such that a1’s histogram is much better (according to some appropriate performance measure) than a2’s, but no cost functions for which the reverse is true. For the NFL theorem to be obeyed in such a scenario, it would have to be true that there are many more f for which a2’s histogram is better than a1’s than vice-versa, but it is only slightly better for all those f. For such a scenario, in a certain sense a1 has better 'head-to-head' minimax behavior than a2; there are f for which a1 beats a2 badly, but none for which a1 does substantially worse than a2." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)
"[...] the NFL theorems mean that if an algorithm does particularly well on average for one class of problems then it must do worse on average over the remaining problems. In particular, if an algorithm performs better than random search on some class of problems then in must perform worse than random search on the remaining problems. Thus comparisons reporting the performance of a particular algorithm with a particular parameter setting on a few sample problems are of limited utility. While such results do indicate behavior on the narrow range of problems considered, one should be very wary of trying to generalize those results to other problems." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)
"[...] if you have a general optimization involving uncertainty and very little prior knowledge, the situation is rather hopeless. Due to the NFL theorem, you cannot do any better than a blind search. Each blind search evaluation will be very expensive, with no hope of future improvement, theoretical or otherwise. And the number of performance searches required to get anywhere is simply too large. Neither time nor theoretical or technological progress are on your side. No grand optimization algorithm to end all algorithms is possible." (Yu-Chi Ho, "The no free lunch theorem and the human-machine interface", IEEE Control Systems Magazine, 1999)
"The No Free Lunch (NFL) theorem […] tells us that without
any structural assumptions on an optimization problem, no algorithm can perform
better on average than blind search." (Yu-Chi Ho, "The no free lunch theorem and
the human-machine interface", IEEE Control Systems Magazine, 1999)
"[...] a general-purpose universal optimization strategy is theoretically impossible, and the only way one strategy can outperform another is if it is specialized to the specific problem under consideration." (Yu-Chi Ho & David L Pepyne, "Simple explanation of the no-free-lunch theorem and its implications", Journal of Optimization Theory and Applications 115, 2002)
"Because No Free Lunch is a well-known concept, researchers are increasingly interested in finding function classes that are well matched to their algorithms. The determination of an algorithm–class pairing is generally approximated by testing the algorithm on one of a number of benchmark functions, then generalizing the results by distilling various defining characteristics of those benchmarks." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)
"Because No Free Lunch theorems dictate that no optimization algorithm can be considered more efficient than any other when considering all possible functions, the desired function class plays a prominent role in the model. In particular, this provides a tractable way to answer the traditionally difficult question of what algorithm is best matched to a particular class of functions. Among the benefits of the model are the ability to specify the function class in a straightforward manner, a natural way to specify noisy or dynamic functions, and a new source of insight into No Free Lunch theorems for optimization." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)
"No Free Lunch dictates that any algorithm may be deceived, a difficulty to which the inference algorithm is not immune." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)
"That the complexity of the problem is inherently tied to the flexibility of the representation of the prior serves to clarify part of No Free Lunch, especially the proof that problem difficulties cannot be ranked in the absence of a specific algorithm: it is not, in fact, the choice of algorithm that makes ranking possible among problems, but the choice of representation." (Christopher K Monson, "No Free Lunch, Bayesian Inference,and Utility: A Decision Theoretic Approach to Optimization", [thesis] 2006)
"The No Free Lunch work is a framework that addresses the core aspects of search, focusing on the connection between fitness functions and effective search algorithms. The central importance of this connection is demonstrated by the No Free Lunch theorem, which states that, averaged over all problems, all search algorithms perform equally. This result implies that if we are comparing a genetic algorithm to some other algorithm (e.g., simulated annealing, or even random search) and the genetic algorithm to some other algorithm (e.g., simulated annealing, or even random search) performs better for some class of problems, then the other algorithm necessarily performs better on problems outside the class. Thus it is essential to incorporate knowledge of the problem into the search algorithm." (S N Sivanandam & S N Deepa, "Introduction to Genetic Algorithms", 2008)
"A priori, it is clear that no method will always be the best [...]. However, it is reasonable to argue that each method will have a set of functions, a type of data, and a range of sample sizes for which it is optimal – a sort of catchment region for each procedure. Ideally, one could partition a space of regression problems into catchment regions, depending on which methods were under consideration, and determine which catchment region seemed most appropriate for each method. This ideal solution would amount to a selection principle for nonparametric methods. Unfortunately, it is unclear how to do this, not least because the catchment regions are unknown." (Bertrand Clarke et al, "Principles and Theory for Data Mining and Machine Learning", 2009)
"Methods perform well if the conditions of their derivation are met, with no method capable of covering all possible conditions. More strongly put, each good method has a domain on which it may be best, and different methods have different domains so the task is to characterize those domains and then figure out which domain a given problem’s solution is likely to be in. This of course, is extraordinarily difficult in its own right." (Bertrand Clarke et al, "Principles and Theory for Data Mining and Machine Learning", 2009)
"The well-known 'No Free Lunch' theorem indicates that there does not exist a pattern classification method that is inherently superior to any other, or even to random guessing without using additional information. It is the type of problem, prior information, and the amount of training samples that determine the form of classifier to apply. In fact, corresponding to different real-world problems, different classes may have different underlying data structures. A classifier should adjust the discriminant boundaries to fit the structures which are vital for classification, especially for the generalization capacity of the classifier." (Hui Xue et al, "SVM: Support Vector Machines", 2009)
"Another important fact, having impact on EA [Evolutionary Algorithm] use is so called No Free Lunch Theorem (NFLT) [...]. Main idea of this theorem is that there is no ideal algorithm which would be able to solve any problem. Simply, if there are for example two algorithms A and B, then for certain subset of possible problems is more suitable algorithms A and for another subset algorithm B. All those subsets can be of course totally disconnected, or/and overlapped." (Ivan Zelinka & Hendrik Richter, "Evolutionary Algorithms for Chaos Researchers", Studies in Computational Intelligence Vol. 267, 2010)
"Each learning algorithm dictates a certain model that comes with a set of assumptions. This inductive bias leads to error if the assumptions do not hold for the data. Learning is an ill-posed problem and with finite data, each algorithm converges to a different solution and fails under different circumstances. The performance of a learner may be fine-tuned to get the highest possible accuracy on a validation set, but this finetuning is a complex task and still there are instances on which even the best learner is not accurate enough. The idea is that there may be another base-learner learner that is accurate on these. By suitably combining multiple base learners then, accuracy can be improved." (Ethem Alpaydin, "Introduction to Machine Learning" 2nd Ed, 2010)
"The problem of comparing classifiers is not at all an easy task. There is no single classifier that works best on all given problems, phenomenon related to the 'No-free-lunch' metaphor, i.e., each classifier (’restaurant’) provides a specific technique associated with the corresponding costs (’menu’ and ’price’ for it). It is hence up to us, using the information and knowledge at hand, to find the optimal trade-off." (Florin Gorunescu, "Data Mining Concepts, Models and Techniques", 2011)
"As a consequence of the no free lunch theorem, we need to develop many different types of models, to cover the wide variety of data that occurs in the real world. And for each model, there may be many different algorithms we can use to train the model, which make different speed-accuracy-complexity tradeoffs." (Kevin P Murphy, "Machine Learning: A Probabilistic Perspective", 2012)
"Much of machine learning is concerned with devising different models, and different algorithms to fit them. We can use methods such as cross validation to empirically choose the best method for our particular problem. However, there is no universally best model - this is sometimes called the no free lunch theorem. The reason for this is that a set of assumptions that works well in one domain may work poorly in another." (Kevin P Murphy, "Machine Learning: A Probabilistic Perspective", 2012)
"The validity of NFL theorems largely depends on the validity
of their fundamental assumptions. However, whether these assumptions are valid
in practice is another question. Often, these assumptions are too stringent,
and thus free lunches are possible." (Xin-She Yang, "Free Lunch or No Free Lunch:
That is not Just a Question?", 2012)
"The idea of feature learning is to automate the process of finding a good representation of the input space. As mentioned before, the No-Free-Lunch theorem tells us that we must incorporate some prior knowledge on the data distribution in order to build a good feature representation." (Shai Shalev-Shwartz & Shai Ben-David, "Understanding Machine Learning: From Theory to Algorithms", 2014)
"We emphasize that while there are some common techniques for feature learning one may want to try, the No-Free-Lunch theorem implies that there is no ultimate feature learner. Any feature learning algorithm might fail on some problem. In other words, the success of each feature learner relies (sometimes implicitly) on some form of prior assumption on the data distribution. Furthermore, the relative quality of features highly depends on the learning algorithm we are later going to apply using these features." (Shai Shalev-Shwartz & Shai Ben-David, "Understanding Machine Learning: From Theory to Algorithms", 2014)
"Choosing an appropriate classification algorithm for a particular problem task requires practice: each algorithm has its own quirks and is based on certain assumptions. To restate the 'No Free Lunch' theorem: no single classifier works best across all possible scenarios. In practice, it is always recommended that you compare the performance of at least a handful of different learning algorithms to select the best model for the particular problem; these may differ in the number of features or samples, the amount of noise in a dataset, and whether the classes are linearly separable or not." (Sebastian Raschka, "Python Machine Learning", 2015)
"Learning theory claims that a machine learning algorithm can generalize well from a finite training set of examples. This seems to contradict some basic principles of logic. Inductive reasoning, or inferring general rules from a limited set of examples, is not logically valid. To logically infer a rule describing every member of a set, one must have information about every member of that set." (Ian Goodfellow et al, "Deep Learning", 2015)
"The no free lunch theorem for machine learning states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average performance (over all possible tasks) as merely predicting that every point belongs to the same class. [...] the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the 'real world' that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about." (Ian Goodfellow et al, "Deep Learning", 2015)
"The no free lunch theorem implies that we must design our machine learning algorithms to perform well on a specific task. We do so by building a set of preferences into the learning algorithm. When these preferences are aligned with the learning problems we ask the algorithm to solve, it performs better." (Ian Goodfellow et al, "Deep Learning", 2015)
"The 'No free lunch' theorem demonstrates that it is not possible to find one algorithm behaving better for any problem. On the other hand, we know that we can work with different degrees of knowledge of the problem which we expect to solve, and that it is not the same to work without knowledge of the problem (hypothesis of the 'no free lunch' theorem) than to work with partial knowledge about the problem, knowledge that allows us to design algorithms with specific characteristics which can make them more suitable to solve of the problem." (Salvador García et al, "Data Preprocessing in Data Mining", 2015)
"Roughly stated, the No Free Lunch theorem states that in the lack of prior knowledge (i.e. inductive bias) on average all predictive algorithms that search for the minimum classification error (or extremum over any risk metric) have identical performance according to any measure." (N D Lewis, "Deep Learning Made Easy with R: A Gentle Introduction for Data Science", 2016)
"However, because ML algorithms are biased to look for different types of patterns, and because there is no one learning bias across all situations, there is no one best ML algorithm. In fact, a theorem known as the 'no free lunch theorem' states that there is no one best ML algorithm that on average outperforms all other algorithms across all possible data sets." (John D Kelleher & Brendan Tierney, "Data Science", 2018)
"The no free lunch theorems set limits on the range of optimality of any method. That is, each methodology has a ‘catchment area’ where it is optimal or nearly so. Often, intuitively, if the optimality is particularly strong then the effectiveness of the methodology falls off more quickly outside its catchment area than if its optimality were not so strong. Boosting is a case in point: it seems so well suited to binary classification that efforts to date to extend it to give effective classification (or regression) more generally have not been very successful. Overall, it remains to characterize the catchment areas where each class of predictors performs optimally, performs generally well, or breaks down." (Bertrand S Clarke & Jennifer L. Clarke, "Predictive Statistics: Analysis and Inference beyond Models", 2018)
"The No Free Lunch theorems prove that under a uniform
distribution over induction problems (search problems or learning problems), all
induction algorithms perform equally. […]
the importance of the theorems arises by using them to analyze scenarios
involving non-uniform distributions, and to compare different algorithms,
without any assumption about the distribution over problems at all. In
particular, the theorems prove that anti-cross-validation (choosing among a set
of candidate algorithms based on which has worst out-of-sample behavior) performs
as well as cross-validation, unless one makes an assumption - which has never
been formalized - about how the distribution over induction problems, on the one
hand, is related to the set of algorithms one is choosing among using (anti-)cross
validation, on the other. In addition, they establish strong caveats concerning
the significance of the many results in the literature which establish the
strength of a particular algorithm without assuming a particular distribution." (David H Wolpert, "What is important about the No Free Lunch theorems?", 2020)
"A well-known theorem called the 'no free lunch' theorem proves exactly what we anecdotally witness when designing and building learning systems. The theorem states that any bias-free learning system will perform no better than chance when applied to arbitrary problems. This is a fancy way of stating that designers of systems must give the system a bias deliberately, so it learns what’s intended. As the theorem states, a truly bias- free system is useless." (Erik J Larson, "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do", 2021)