"Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it." (Eudoxus, cca. 4th century BC)
"Every stage of science has its train of practical applications and systematic inferences, arising both from the demands of convenience and curiosity, and from the pleasure which, as we have already said, ingenious and active-minded men feel in exercising the process of deduction."
"Truths are known to us in two ways: some are known directly, and of themselves; some through the medium of other truths. The former are the subject of Intuition, or Consciousness; the latter, of Inference; the latter of Inference. The truths known by Intuition are the original premises, from which all others are inferred." (John S Mill, "A System of Logic, Ratiocinative and Inductive", 1858)
"It is experience which has given us our first real knowledge of Nature and her laws. It is experience, in the shape of observation and experiment, which has given us the raw material out of which hypothesis and inference have slowly elaborated that richer conception of the material world which constitutes perhaps the chief, and certainly the most characteristic, glory of the modern mind." (Arthur J Balfour, "The Foundations of Belief", 1912)
"The only thing we know for sure about a missing data point is that it is not there, and there is nothing that the magic of statistics can do change that. The best that can be managed is to estimate the extent to which missing data have influenced the inferences we wish to draw." (Howard Wainer, "14 Conversations About Three Things", Journal of Educational and Behavioral Statistics Vol. 35(1, 2010)
"The study of inductive inference belongs to the theory of probability, since observational facts can make a theory only probable but will never make it absolutely certain." (Hans Reichenbach, "The Rise of Scientific Philosophy", 1951)
"Statistics is the name for that science and art which deals with uncertain inferences - which uses numbers to find out something about nature and experience." (Warren Weaver, 1952)
"The heart of all major discoveries in the physical sciences is the discovery of novel methods of representation and so of fresh techniques by which inferences can be drawn - and drawn in ways which fit the phenomena under investigation." (Stephen Toulmin, "The Philosophy of Science", 1957)
"Assumptions that we make, such as those concerning the form of the population sampled, are always untrue." (David R Cox, "Some problems connected with statistical inference", Annals of Mathematical Statistics 29, 1958)
"Exact truth of a null hypothesis is very unlikely except in a genuine uniformity trial." (David R Cox, "Some problems connected with statistical inference", Annals of Mathematical Statistics 29, 1958)
"[...] the test of significance has been carrying too much of the burden of scientific inference. It may well be the case that wise and ingenious investigators can find their way to reasonable conclusions from data because and in spite of their procedures. Too often, however, even wise and ingenious investigators [...] tend to credit the test of significance with properties it does not have." (David Bakan, "The test of significance in psychological research", Psychological Bulletin 66, 1966)
"[...] we need to get on with the business of generating [...] hypotheses and proceed to do investigations and make inferences which bear on them, instead of [...] testing the statistical null hypothesis in any number of contexts in which we have every reason to suppose that it is false in the first place." (David Bakan, "The test of significance in psychological research", Psychological Bulletin 66, 1966)
"An analogy is a relationship between two entities, processes, or what you will, which allows inferences to be made about one of the things, usually that about which we know least, on the basis of what we know about the other. […] The art of using analogy is to balance up what we know of the likenesses against the unlikenesses between two things, and then on the basis of this balance make an inference as to what is called the neutral analogy, that about which we do not know." (Rom Harré," The Philosophies of Science" , 1972)
"Almost all efforts at data analysis seek, at some point, to generalize the results and extend the reach of the conclusions beyond a particular set of data. The inferential leap may be from past experiences to future ones, from a sample of a population to the whole population, or from a narrow range of a variable to a wider range. The real difficulty is in deciding when the extrapolation beyond the range of the variables is warranted and when it is merely naive. As usual, it is largely a matter of substantive judgment - or, as it is sometimes more delicately put, a matter of 'a priori nonstatistical considerations'."
"Pencil and paper for construction of distributions, scatter diagrams, and run-charts to compare small groups and to detect trends are more efficient methods of estimation than statistical inference that depends on variances and standard errors, as the simple techniques preserve the information in the original data." (W Edwards Deming, "On Probability as Basis for Action", American Statistician, Volume 29, Number 4, November 1975)
"The advantage of semantic networks over standard logic is that some selected set of the possible inferences can be made in a specialized and efficient way. If these correspond to the inferences that people make naturally, then the system will be able to do a more natural sort of reasoning than can be easily achieved using formal logical deduction." (Avron Barr, Natural Language Understanding, AI Magazine Vol. 1 (1), 1980)
"Another reason for the applied statistician to care about Bayesian inference is that consumers of statistical answers, at least interval estimates, commonly interpret them as probability statements about the possible values of parameters. Consequently, the answers statisticians provide to consumers should be capable of being interpreted as approximate Bayesian statements." (Donald B Rubin, "Bayesianly justifiable and relevant frequency calculations for the applied statistician", Annals of Statistics 12(4), 1984)
"The grotesque emphasis on significance tests in statistics courses of all kinds [...] is taught to people, who if they come away with no other notion, will remember that statistics is about tests for significant differences. [...] The apparatus on which their statistics course has been constructed is often worse than irrelevant, it is misleading about what is important in examining data and making inferences." (John A Nelder, "Discussion of Dr Chatfield’s paper", Journal of the Royal Statistical Society A 148, 1985)
"Models are often used to decide issues in situations marked by uncertainty. However statistical differences from data depend on assumptions about the process which generated these data. If the assumptions do not hold, the inferences may not be reliable either. This limitation is often ignored by applied workers who fail to identify crucial assumptions or subject them to any kind of empirical testing. In such circumstances, using statistical procedures may only compound the uncertainty." (David A Greedman & William C Navidi, "Regression Models for Adjusting the 1980 Census", Statistical Science Vol. 1 (1), 1986)
"It is difficult to distinguish deduction from what in other circumstances is called problem-solving. And concept learning, inference, and reasoning by analogy are all instances of inductive reasoning. (Detectives typically induce, rather than deduce.) None of these things can be done separately from each other, or from anything else. They are pseudo-categories." (Frank Smith, "To Think: In Language, Learning and Education", 1990)
"No one has ever shown that he or she had a free lunch. Here, of course, 'free lunch' means 'usefulness of a model that is locally easy to make inferences from'. (John Tukey, "Issues relevant to an honest account of data-based inference, partially in the light of Laurie Davies’ paper", 1993)
"Probabilistic inference is the classical paradigm for data analysis in science and technology. It rests on a foundation of randomness; variation in data is ascribed to a random process in which nature generates data according to a probability distribution. This leads to a codification of uncertainly by confidence intervals and hypothesis tests." (William S Cleveland, "Visualizing Data", 1993)
"When the distributions of two or more groups of univariate data are skewed, it is common to have the spread increase monotonically with location. This behavior is monotone spread. Strictly speaking, monotone spread includes the case where the spread decreases monotonically with location, but such a decrease is much less common for raw data. Monotone spread, as with skewness, adds to the difficulty of data analysis. For example, it means that we cannot fit just location estimates to produce homogeneous residuals; we must fit spread estimates as well. Furthermore, the distributions cannot be compared by a number of standard methods of probabilistic inference that are based on an assumption of equal spreads; the standard t-test is one example. Fortunately, remedies for skewness can cure monotone spread as well." (William S Cleveland, "Visualizing Data", 1993)
"In the design of experiments, one has to use some informal prior knowledge. How does one construct blocks in a block design problem for instance? It is stupid to think that use is not made of a prior. But knowing that this prior is utterly casual, it seems ludicrous to go through a lot of integration, etc., to obtain ‘exact’ posterior probabilities resulting from this prior. So, I believe the situation with respect to Bayesian inference and with respect to inference, in general, has not made progress. Well, Bayesian statistics has led to a great deal of theoretical research. But I don’t see any real utilizations in applications, you know. Now no one, as far as I know, has examined the question of whether the inferences that are obtained are, in fact, realized in the predictions that they are used to make." (Oscar Kempthorne, "A conversation with Oscar Kempthorne", Statistical Science vol. 10, 1995)
"The science of statistics may be described as exploring, analyzing and summarizing data; designing or choosing appropriate ways of collecting data and extracting information from them; and communicating that information. Statistics also involves constructing and testing models for describing chance phenomena. These models can be used as a basis for making inferences and drawing conclusions and, finally, perhaps for making decisions." (Fergus Daly et al, "Elements of Statistics", 1995)
"Theories rarely arise as patient inferences forced by accumulated facts. Theories are mental constructs potentiated by complex external prods (including, in idealized cases, a commanding push from empirical reality)." (Stephen J Gould, "Leonardo's Mountain of Clams and the Diet of Worms" , 1998)
"Let us regard a proof of an assertion as a purely mechanical procedure using precise rules of inference starting with a few unassailable axioms. This means that an algorithm can be devised for testing the validity of an alleged proof simply by checking the successive steps of the argument; the rules of inference constitute an algorithm for generating all the statements that can be deduced in a finite number of steps from the axioms." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)
"[…] philosophical theories are structured by conceptual metaphors that constrain which inferences can be drawn within that philosophical theory. The (typically unconscious) conceptual metaphors that are constitutive of a philosophical theory have the causal effect of constraining how you can reason within that philosophical framework." (George Lakoff, "Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought", 1999)
"Even if our cognitive maps of causal structure were perfect, learning, especially double-loop learning, would still be difficult. To use a mental model to design a new strategy or organization we must make inferences about the consequences of decision rules that have never been tried and for which we have no data. To do so requires intuitive solution of high-order nonlinear differential equations, a task far exceeding human cognitive capabilities in all but the simplest systems." (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)
"Bayesian inference is a controversial approach because it inherently embraces a subjective notion of probability. In general, Bayesian methods provide no guarantees on long run performance."
"The Bayesian approach is based on the following postulates: (B1) Probability describes degree of belief, not limiting frequency. As such, we can make probability statements about lots of things, not just data which are subject to random variation. […] (B2) We can make probability statements about parameters, even though they are fixed constants. (B3) We make inferences about a parameter θ by producing a probability distribution for θ. Inferences, such as point estimates and interval estimates, may then be extracted from this distribution."
"Statistical inference, or 'learning' as it is called in computer science, is the process of using data to infer the distribution that generated the data."
"A mental model is conceived […] as a knowledge structure possessing slots that can be filled not only with empirically gained information but also with ‘default assumptions’ resulting from prior experience. These default assumptions can be substituted by updated information so that inferences based on the model can be corrected without abandoning the model as a whole. Information is assimilated to the slots of a mental model in the form of ‘frames’ which are understood here as ‘chunks’ of knowledge with a well-defined meaning anchored in a given body of shared knowledge." (Jürgen Renn, "Before the Riemann Tensor: The Emergence of Einstein’s Double Strategy", "The Universe of General Relativity" Ed. by A.J. Kox & Jean Eisenstaedt, 2005)
"Statistics is the branch of mathematics that uses observations and measurements called data to analyze, summarize, make inferences, and draw conclusions based on the data gathered." (Allan G Bluman, "Probability Demystified", 2005)
"In specific cases, we think by applying mental rules, which are similar to rules in computer programs. In most of the cases, however, we reason by constructing, inspecting, and manipulating mental models. These models and the processes that manipulate them are the basis of our competence to reason. In general, it is believed that humans have the competence to perform such inferences error-free. Errors do occur, however, because reasoning performance is limited by capacities of the cognitive system, misunderstanding of the premises, ambiguity of problems, and motivational factors. Moreover, background knowledge can significantly influence our reasoning performance. This influence can either be facilitation or an impedance of the reasoning process." (Carsten Held et al, "Mental Models and the Mind", 2006)
"[…] statistics is the key discipline for predicting the future or for making inferences about the unknown, or for producing convenient summaries of data." (David J Hand, "Statistics: A Very Short Introduction", 2008)
"The only thing we know for sure about a missing data point is that it is not there, and there is nothing that the magic of statistics can do change that. The best that can be managed is to estimate the extent to which missing data have influenced the inferences we wish to draw." (Howard Wainer, "14 Conversations About Three Things", Journal of Educational and Behavioral Statistics Vol. 35(1, 2010)
"When statistical inferences, such as p-values, follow extensive looks at the data, they no longer have their usual interpretation. Ignoring this reality is dishonest: it is like painting a bull’s eye around the landing spot of your arrow. This is known in some circles as p-hacking, and much has been written about its perils and pitfalls." (Robert E Kass et all, "Ten Simple Rules for Effective Statistical Practice", PLoS Comput Biol 12(6), 2016)
"Inference is to bring about a new thought, which in logic amounts to drawing a conclusion, and more generally involves using what we already know, and what we see or observe, to update prior beliefs. […] Inference is also a leap of sorts, deemed reasonable […] Inference is a basic cognitive act for intelligent minds. If a cognitive agent (a person, an AI system) is not intelligent, it will infer badly. But any system that infers at all must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence. If an AI system is not inferring at all, it doesn’t really deserve to be called AI." (Erik J Larson, "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do", 2021)
"In statistical inference and machine learning, we often talk about estimates and estimators. Estimates are basically our best guesses regarding some quantities of interest given (finite) data. Estimators are computational devices or procedures that allow us to map between a given (finite) data sample and an estimate of interest." (Aleksander Molak, "Causal Inference and Discovery in Python", 2023)
"The basic goal of causal inference is to estimate the causal effect of one set of variables on another. In most cases, to do it accurately, we need to know which variables we should control for. [...] to accurately control for confounders, we need to go beyond the realm of pure statistics and use the information about the data-generating process, which can be encoded as a (causal) graph. In this sense, the ability to translate between graphical and statistical properties is central to causal inference." (Aleksander Molak, "Causal Inference and Discovery in Python", 2023)
"Statistics is the science, the art, the philosophy, and the technique of making inferences from the particular to the general." (John W Tukey)
"The old rule of trusting the Central Limit Theorem if the sample size is larger than 30 is just that–old. Bootstrap and permutation testing let us more easily do inferences for a wider variety of statistics." (Tim Hesterberg)
More quotes on "Inference" at the-web-of-knowledge.blogspot.com.