11 December 2018

🔭Data Science: Measurement (Just the Quotes)

"Accurate and minute measurement seems to the nonscientific imagination a less lofty and dignified work than looking for something new. But nearly all the grandest discoveries of science have been but the rewards of accurate measurement and patient long-continued labor in the minute sifting of numerical results." (William Thomson, Lord Kelvin, "Report of the British Association for the Advancement of Science" Vol. 41, 1871)

"It is clear that one who attempts to study precisely things that are changing must have a great deal to do with measures of change." (Charles Cooley, "Observations on the Measure of Change", Journal of the American Statistical Association (21), 1893)

"Nothing is more certain in scientific method than that approximate coincidence alone can be expected. In the measurement of continuous quantity perfect correspondence must be accidental, and should give rise to suspicion rather than to satisfaction." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1874)

"Physical research by experimental methods is both a broadening and a narrowing field. There are many gaps yet to be filled, data to be accumulated, measurements to be made with great precision, but the limits within which we must work are becoming, at the same time, more and more defined." (Elihu Thomson, "Annual Report of the Board of Regents of the Smithsonian Institution", 1899)

"[…] statistics is the science of the measurement of the social organism, regarded as a whole, in all its manifestations." (Sir Arthur L Bowley, "Elements of Statistics", 1901)

"Statistics may rightly be called the science of averages. […] Great numbers and the averages resulting from them, such as we always obtain in measuring social phenomena, have great inertia. […] It is this constancy of great numbers that makes statistical measurement possible. It is to great numbers that statistical measurement chiefly applies." (Sir Arthur L Bowley, "Elements of Statistics", 1901)

"Just as data gathered by an incompetent observer are worthless - or by a biased observer, unless the bias can be measured and eliminated from the result - so also conclusions obtained from even the best data by one unacquainted with the principles of statistics must be of doubtful value." (William F White, "A Scrap-Book of Elementary Mathematics: Notes, Recreations, Essays", 1908)

"Science begins with measurement and there are some people who cannot be measurers; and just as we distinguish carpenters who can work to this or that fraction of an inch of accuracy, so we must distinguish ourselves and our acquaintances as able to observe and record to this or that degree of truthfulness." (John A Thomson, "Introduction to Science", 1911)

"Science depends upon measurement, and things not measurable are therefore excluded, or tend to be excluded, from its attention." (Arthur J Balfour, "Address", 1917)

"Make more measurements than necessary to obtain the result and see to what extent these measurements, which in a certain sense control one another, agree with one another. By looking at how the measures fit to one another one can gain a sort of indication of probability of how precise the single measurements are and within which margins the result reasonably has to be maintained." (Felix Klein, "Elementary Mathematics from a Higher Standpoint" Vol III: "Precision Mathematics and Approximation Mathematics", 1928)
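
Klein's advice - take more measurements than necessary and see how they control one another - is easy to demonstrate. The sketch below (plain Python, with made-up readings) uses the spread of replicates to gauge the precision of the combined result.

```python
import statistics

# Hypothetical replicate measurements of one quantity (made-up numbers)
readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00]

mean = statistics.mean(readings)     # the combined result
sd = statistics.stdev(readings)      # how far the readings "control one another"
sem = sd / len(readings) ** 0.5      # margin within which the result holds

print(f"result = {mean:.3f} +/- {sem:.3f}")
```

The closer the redundant readings agree (small spread), the tighter the margin within which the result "reasonably has to be maintained".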

"Search for measurable elements among your phenomena, and then search for relations between these measures of physical quantities." (Alfred N Whitehead, "Science and the Modern World", 1929)

"While it is true that theory often sets difficult, if not impossible tasks for the experiment, it does, on the other hand, often lighten the work of the experimenter by disclosing cogent relationships which make possible the indirect determination of inaccessible quantities and thus render difficult measurements unnecessary." (Georg Joos, "Theoretical Physics", 1934)

"It is important to realize that it is not the one measurement, alone, but its relation to the rest of the sequence that is of interest." (William E Deming, "Statistical Adjustment of Data", 1938)

"Probabilities must be regarded as analogous to the measurement of physical magnitudes; that is to say, they can never be known exactly, but only within certain approximation." (Emile Borel, "Probabilities and Life", 1943)

"A model, like a novel, may resonate with nature, but it is not a ‘real’ thing. Like a novel, a model may be convincing - it may ‘ring true’ if it is consistent with our experience of the natural world. But just as we may wonder how much the characters in a novel are drawn from real life and how much is artifice, we might ask the same of a model: How much is based on observation and measurement of accessible phenomena, how much is convenience? Fundamentally, the reason for modeling is a lack of full access, either in time or space, to the phenomena of interest." (Kenneth Belitz, Science, Vol. 263, 1994)

"Every bit of knowledge we gain and every conclusion we draw about the universe or about any part or feature of it depends finally upon some observation or measurement. Mankind has had again and again the humiliating experience of trusting to intuitive, apparently logical conclusions without observations, and has seen Nature sail by in her radiant chariot of gold in an entirely different direction." (Oliver J Lee, "Measuring Our Universe: From the Inner Atom to Outer Space", 1950)

"Statistics is the fundamental and most important part of inductive logic. It is both an art and a science, and it deals with the collection, the tabulation, the analysis and interpretation of quantitative and qualitative measurements. It is concerned with the classifying and determining of actual attributes as well as the making of estimates and the testing of various hypotheses by which probable, or expected, values are obtained. It is one of the means of carrying on scientific research in order to ascertain the laws of behavior of things - be they animate or inanimate. Statistics is the technique of the Scientific Method." (Bruce D Greenshields & Frank M Weida, "Statistics with Applications to Highway Traffic Analyses", 1952)

"We are committed to the scientific method, and measurement is the foundation of that method; hence we are prone to assume that whatever is measurable must be significant and that whatever cannot be measured may as well be disregarded." (Joseph W Krutch, "Human Nature and the Human Condition", 1959)

"No observations are absolutely trustworthy. In no field of observation can we entirely rule out the possibility that an observation is vitiated by a large measurement or execution error. If a reading is found to lie a very long way from its fellows in a series of replicate observations, there must be a suspicion that the deviation is caused by a blunder or gross error of some kind. [...] One sufficiently erroneous reading can wreck the whole of a statistical analysis, however many observations there are." (Francis J Anscombe, "Rejection of Outliers", Technometrics Vol. 2 (2), 1960)
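
Anscombe's warning is easy to see with toy numbers. In the sketch below (plain Python, hypothetical readings), a single gross error wrecks the mean, while a resistant summary like the median barely moves.

```python
import statistics

clean = [10.1, 9.9, 10.0, 10.2, 9.8]   # well-behaved replicate readings
with_blunder = clean + [100.0]         # one grossly erroneous reading

# One bad reading drags the mean far from its fellows...
print(statistics.mean(clean), statistics.mean(with_blunder))

# ...while the median, a resistant summary, is nearly unchanged.
print(statistics.median(clean), statistics.median(with_blunder))
```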

"Statistics provides a quantitative example of the scientific process usually described qualitatively by saying that scientists observe nature, study the measurements, postulate models to predict new measurements, and validate the model by the success of prediction." (Marshall J Walker, "The Nature of Scientific Thought", 1963)

"This other world is the so-called physical world image; it is merely an intellectual structure. To a certain extent it is arbitrary. It is a kind of model or idealization created in order to avoid the inaccuracy inherent in every measurement and to facilitate exact definition." (Max Planck, "The Philosophy of Physics", 1963)

"Measurement, we have seen, always has an element of error in it. The most exact description or prediction that a scientist can make is still only approximate." (Abraham Kaplan, "The Conduct of Inquiry: Methodology for Behavioral Science", 1964)

"Measurement is the link between mathematics and science." (Brian Ellis, "Basic Concepts of Measurement", 1966)

"The aim of science is not so much to search for truth, or even truths, as to classify our knowledge and to establish relations between observable phenomena in order to be able to predict the future in a certain measure and to explain the sequence of phenomena in relation to ourselves." (Pierre L du Noüy, "Between Knowing and Believing", 1967)

"[…] it is not enough to say: 'There's error in the data and therefore the study must be terribly dubious'. A good critic and data analyst must do more: he or she must also show how the error in the measurement or the analysis affects the inferences made on the basis of that data and analysis." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"Typically, data analysis is messy, and little details clutter it. Not only confounding factors, but also deviant cases, minor problems in measurement, and ambiguous results lead to frustration and discouragement, so that more data are collected than analyzed. Neglecting or hiding the messy details of the data reduces the researcher's chances of discovering something new." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"A mature science, with respect to the matter of errors in variables, is not one that measures its variables without error, for this is impossible. It is, rather, a science which properly manages its errors, controlling their magnitudes and correctly calculating their implications for substantive conclusions." (Otis D Duncan, "Introduction to Structural Equation Models", 1975)

"Crude measurement usually yields misleading, even erroneous conclusions no matter how sophisticated a technique is used." (Henry T Reynolds, "Analysis of Nominal Data", 1977)

"But real-life situations often require us to measure probability in precisely this fashion - from sample to universe. In only rare cases does life replicate games of chance, for which we can determine the probability of an outcome before an event even occurs - a priori […]. In most instances, we have to estimate probabilities from what happened after the fact - a posteriori. The very notion of a posteriori implies experimentation and changing degrees of belief." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

"Measurement has meaning only if we can transmit the information without ambiguity to others." (Russell Fox & Max Gorbuny, "The Science of Science", 1997)

"Since the average is a measure of location, it is common to use averages to compare two data sets. The set with the greater average is thought to ‘exceed’ the other set. While such comparisons may be helpful, they must be used with caution. After all, for any given data set, most of the values will not be equal to the average." (Donald J Wheeler, "Understanding Variation: The Key to Managing Chaos" 2nd Ed., 2000)

"First, good statistics are based on more than guessing. [...] Second, good statistics are based on clear, reasonable definitions. Remember, every statistic has to define its subject. Those definitions ought to be clear and made public. [...] Third, good statistics are based on clear, reasonable measures. Again, every statistic involves some sort of measurement; while all measures are imperfect, not all flaws are equally serious. [...] Finally, good statistics are based on good samples." (Joel Best, "Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists", 2001)

"There are several key issues in the field of statistics that impact our analyses once data have been imported into a software program. These data issues are commonly referred to as the measurement scale of variables, restriction in the range of data, missing data values, outliers, linearity, and nonnormality." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"There are three possible reasons for [the] absence of predictive power. First, it is possible that the models are misspecified. Second, it is possible that the model’s explanatory factors are measured at too high a level of aggregation [...] Third, [...] the search for statistically significant relationships may not be the strategy best suited for evaluating our model’s ability to explain real world events [...] the lack of predictive power is the result of too much emphasis having been placed on finding statistically significant variables, which may be overdetermined. Statistical significance is generally a flawed way to prune variables in regression models [...] Statistically significant variables may actually degrade the predictive accuracy of a model [...] [By using] models that are constructed on the basis of pruning undertaken with the shears of statistical significance, it is quite possible that we are winnowing our models away from predictive accuracy." (Michael D Ward et al, "The perils of policy by p-value: predicting civil conflicts" Journal of Peace Research 47, 2010)

"GIGO is a famous saying coined by early computer scientists: garbage in, garbage out. At the time, people would blindly put their trust into anything a computer output indicated because the output had the illusion of precision and certainty. If a statistic is composed of a series of poorly defined measures, guesses, misunderstandings, oversimplifications, mismeasurements, or flawed estimates, the resulting conclusion will be flawed." (Daniel J Levitin, "Weaponized Lies", 2017)

"Repeated observations of the same phenomenon do not always produce the same results, due to random noise or error. Sampling errors result when our observations capture unrepresentative circumstances, like measuring rush hour traffic on weekends as well as during the work week. Measurement errors reflect the limits of precision inherent in any sensing device. The notion of signal to noise ratio captures the degree to which a series of observations reflects a quantity of interest as opposed to data variance. As data scientists, we care about changes in the signal instead of the noise, and such variance often makes this problem surprisingly difficult." (Steven S Skiena, "The Data Science Design Manual", 2017)
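
The signal-to-noise idea Skiena describes can be sketched in a few lines of plain Python (the observations below are made up): the mean of repeated observations stands in for the signal of interest, their spread for the noise.

```python
import statistics

# Hypothetical repeated observations of one quantity: signal plus random noise
observations = [20.4, 19.7, 20.1, 20.6, 19.5, 20.2, 19.9, 20.0]

signal = statistics.mean(observations)   # estimate of the quantity of interest
noise = statistics.stdev(observations)   # data variance around it
snr = signal / noise                     # signal-to-noise ratio

print(f"signal ~ {signal:.2f}, noise ~ {noise:.2f}, SNR ~ {snr:.1f}")
```

A high SNR means changes in the observations mostly reflect the quantity of interest; a low SNR means they mostly reflect the noise we should ignore.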

"It’d be nice to fondly imagine that high-quality statistics simply appear in a spreadsheet somewhere, divine providence from the numerical heavens. Yet any dataset begins with somebody deciding to collect the numbers. What numbers are and aren’t collected, what is and isn’t measured, and who is included or excluded are the result of all-too-human assumptions, preconceptions, and oversights." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"People do care about how they are measured. What can we do about this? If you are in the position to measure something, think about whether measuring it will change people’s behaviors in ways that undermine the value of your results. If you are looking at quantitative indicators that others have compiled, ask yourself: Are these numbers measuring what they are intended to measure? Or are people gaming the system and rendering this measure useless?" (Carl T Bergstrom & Jevin D West, "Calling Bullshit: The Art of Skepticism in a Data-Driven World", 2020)

"Premature enumeration is an equal-opportunity blunder: the most numerate among us may be just as much at risk as those who find their heads spinning at the first mention of a fraction. Indeed, if you’re confident with numbers you may be more prone than most to slicing and dicing, correlating and regressing, normalizing and rebasing, effortlessly manipulating the numbers on the spreadsheet or in the statistical package - without ever realizing that you don’t fully understand what these abstract quantities refer to. Arguably this temptation lay at the root of the last financial crisis: the sophistication of mathematical risk models obscured the question of how, exactly, risks were being measured, and whether those measurements were something you’d really want to bet your global banking system on." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"The whole discipline of statistics is built on measuring or counting things. […] it is important to understand what is being measured or counted, and how. It is surprising how rarely we do this. Over the years, as I found myself trying to lead people out of statistical mazes week after week, I came to realize that many of the problems I encountered were because people had taken a wrong turn right at the start. They had dived into the mathematics of a statistical claim - asking about sampling errors and margins of error, debating if the number is rising or falling, believing, doubting, analyzing, dissecting - without taking the time to understand the first and most obvious fact: What is being measured, or counted? What definition is being used?" (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

🔭Data Science: Induction (Just the Quotes)

"The Syllogism consists of propositions, propositions consist of words, words are symbols of notions. Therefore if the notions themselves (which is the root of the matter) are confused and over-hastily abstracted from the facts, there can be no firmness in the superstructure. Our only hope therefore lies in a true induction." (Francis Bacon, "The New Organon", 1620)

"As in Mathematics, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions but such as are taken from Experiments, or other certain Truths." (Sir Isaac Newton, "Opticks", 1704)

"The most important questions of life are, for the most part, really only problems of probability. Strictly speaking one may even say that nearly all our knowledge is problematical; and in the small number of things which we are able to know with certainty, even in the mathematical sciences themselves, induction and analogy, the principal means for discovering truth, are based on probabilities, so that the entire system of human knowledge is connected with this theory." (Pierre-Simon Laplace, "Theorie Analytique des Probabilités", 1812)

"Induction, analogy, hypotheses founded upon facts and rectified continually by new observations, a happy tact given by nature and strengthened by numerous comparisons of its indications with experience, such are the principal means for arriving at truth." (Pierre-Simon Laplace, "A Philosophical Essay on Probabilities", 1814)

"One may even say, strictly speaking, that almost all our knowledge is only probable; and in the small number of things that we are able to know with certainty, in the mathematical sciences themselves, the principal means of arriving at the truth - induction and analogy - are based on probabilities, so that the whole system of human knowledge is tied up with the theory set out in this essay." (Pierre-Simon Laplace, "Philosophical Essay on Probabilities", 1814)

"Such is the tendency of the human mind to speculation, that on the least idea of an analogy between a few phenomena, it leaps forward, as it were, to a cause or law, to the temporary neglect of all the rest; so that, in fact, almost all our principal inductions must be regarded as a series of ascents and descents, and of conclusions from a few cases, verified by trial on many." (Sir John Herschel, "A Preliminary Discourse on the Study of Natural Philosophy" , 1830)

"The principle of deduction is, that things which agree with the same thing agree with one another. The principle of induction is, that in the same circumstances and in the same substances, from the same causes the same effects will follow. The mathematical and metaphysical sciences are founded on deduction; the physical sciences rest on induction." (William Fleming, "A vocabulary of the philosophical sciences", 1857)

"I am convinced that it is impossible to expound the methods of induction in a sound manner, without resting them on the theory of probability. Perfect knowledge alone can give certainty, and in nature perfect knowledge would be infinite knowledge, which is clearly beyond our capacities. We have, therefore, to content ourselves with partial knowledge, - knowledge mingled with ignorance, producing doubt." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1887)

"In every science, after having analysed the ideas, expressing the more complicated by means of the more simple, one finds a certain number that cannot be reduced among them, and that one can define no further. These are the primitive ideas of the science; it is necessary to acquire them through experience, or through induction; it is impossible to explain them by deduction." (Giuseppe Peano, "Notations de Logique Mathématique", 1894)

"If men of science owe anything to us, we may learn much from them that is essential. For they can show how to test proof, how to secure fulness and soundness in induction, how to restrain and to employ with safety hypothesis and analogy." (Lord John Acton, [Lecture] "The Study of History", 1895)

"A system is a set of objects comprising all that stands to one another in a group of connected relations. Induction according to ordinary logic rises from the contemplation of a sample of a class to that of a whole class; but according to the logic of relatives it rises from the contemplation of a fragment of a system to the envisagement of the complete system." (Charles S Peirce, "Cambridge Lectures on Reasoning and the Logic of Things: Detached Ideas on Vitally Important Topics", 1898)

"Induction applied to the physical sciences is always uncertain, because it rests on the belief in a general order of the universe, an order outside of us." (Henri Poincaré, "Science and Hypothesis", 1901)

"To say that observations of the past are certain, whereas predictions are merely probable, is not the ultimate answer to the question of induction; it is only a sort of intermediate answer, which is incomplete unless a theory of probability is developed that explains what we should mean by ‘probable’ and on what ground we can assert probabilities." (Hans Reichenbach, "The Rise of Scientific Philosophy", 1951)

"[…] the human reason discovers new relations between things not by deduction, but by that unpredictable blend of speculation and insight […] induction, which - like other forms of imagination - cannot be formalized." (Jacob Bronowski, "The Reach of Imagination", 1967)

"The word ‘induction’ has two essentially different meanings. Scientific induction is a process by which scientists make observations of particular cases, such as noticing that some crows are black, then leap to the universal conclusion that all crows are black. The conclusion is never certain. There is always the possibility that at least one unobserved crow is not black." (Martin Gardner, "Aha! Insight", 1978)

"Science, since people must do it, is a socially embedded activity. It progresses by hunch, vision, and intuition. Much of its change through time does not record a closer approach to absolute truth, but the alteration of cultural contexts that influence it so strongly. Facts are not pure and unsullied bits of information; culture also influences what we see and how we see it. Theories, moreover, are not inexorable inductions from facts. The most creative theories are often imaginative visions imposed upon facts; the source of imagination is also strongly cultural." (Stephen J Gould, "The Mismeasure of Man", 1980)

"All great theories are expansive, and all notions so rich in scope and implication are underpinned by visions about the nature of things. You may call these visions ‘philosophy’, or ‘metaphor’, or ‘organizing principle’, but one thing they are surely not - they are not simple inductions from observed facts of the natural world." (Stephen J Gould, "Time’s Arrow, Time’s Cycle", 1987)

"Model building is the art of selecting those aspects of a process that are relevant to the question being asked. As with any art, this selection is guided by taste, elegance, and metaphor; it is a matter of induction, rather than deduction. High science depends on this art." (John H Holland, "Hidden Order: How Adaptation Builds Complexity", 1995)

More quotes on "Induction" at the-web-of-knowledge.blogspot.com

10 December 2018

🔭Data Science: Generalization (Just the Quotes)

"General assertions, like general truths, are not always applicable to individual cases [...]" (Letitia E Landon, "Romance and Reality", 1831)

"Every science begins by accumulating observations, and presently generalizes these empirically; but only when it reaches the stage at which its empirical generalizations are included in a rational generalization does it become developed science." (Herbert Spencer, "The Data of Ethics", 1879)

"Let us notice first of all, that every generalization implies in some measure the belief in the unity and simplicity of nature." (Jules H Poincaré, "Science and Hypothesis", 1905)

"We lay down a fundamental principle of generalization by abstraction: The existence of analogies between central features of various theories implies the existence of a general theory which underlies the particular theories and unifies them with respect to those central features." (Eliakim H Moore, "Introduction to a Form of General Analysis", 1910)

"Sometimes the probability in favor of a generalization is enormous, but the infinite probability of certainty is never reached." (William Dampier-Whetham, "Science and the Human Mind", 1912)

"Generalization is the golden thread which binds many facts into one simple description." (Joseph W Mellor, "A Comprehensive Treatise on Inorganic and Theoretical Chemistry", 1922)

"The former distrust of specialization has been supplanted by its opposite, a distrust of generalization. Not only has man become a specialist in practice, he is being taught that special facts represent the highest form of knowledge." (Richard Weaver, "Ideas have Consequences", 1948)

"The transition from a paradigm to a new one from which a new tradition of normal science can emerge is far from a cumulative process, one achieved by an articulation or extension of the old paradigm. Rather it is a reconstruction of the field from new fundamentals, a reconstruction that changes some of the field’s most elementary theoretical generalizations as well as many of its paradigm methods and applications." (Thomas S Kuhn, "The Structure of Scientific Revolutions", 1962)

"Theories are generalizations and unifications, and as such they cannot logically follow only from our experiences of a few particular events." (John T Davies, "The Scientific Approach", 1965)

"At each level of complexity, entirely new properties appear. [And] at each stage, entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one." (Philip W Anderson, "More is Different", Science Vol. 177, 1972)

"Science uses the senses but does not enjoy them; finally buries them under theory, abstraction, mathematical generalization." (Theodore Roszak, "Where the Wasteland Ends", 1972)

"Almost all efforts at data analysis seek, at some point, to generalize the results and extend the reach of the conclusions beyond a particular set of data. The inferential leap may be from past experiences to future ones, from a sample of a population to the whole population, or from a narrow range of a variable to a wider range. The real difficulty is in deciding when the extrapolation beyond the range of the variables is warranted and when it is merely naive. As usual, it is largely a matter of substantive judgment - or, as it is sometimes more delicately put, a matter of 'a priori nonstatistical considerations'." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"A single observation that is inconsistent with some generalization points to the falsehood of the generalization, and thereby 'points to itself'." (Ian Hacking, "The Emergence Of Probability", 1975)

"The word generalization in literature usually means covering too much territory too thinly to be persuasive, let alone convincing. In science, however, a generalization means a principle that has been found to hold true in every special case." (Buckminster Fuller, "Synergetics: Explorations in the Geometry of Thinking", 1975)

"Prediction can never be absolutely valid and therefore science can never prove some generalization or even test a single descriptive statement and in that way arrive at final truth." (Gregory Bateson, "Mind and Nature: A Necessary Unity", 1979)

"There are those who try to generalize, synthesize, and build models, and there are those who believe nothing and constantly call for more data. The tension between these two groups is a healthy one; science develops mainly because of the model builders, yet they need the second group to keep them honest." (Andrew Miall, "Principles of Sedimentary Basin Analysis", 1984)

"We generalize from one situation to another not because we cannot tell the difference between the two situations but because we judge that they are likely to belong to a set of situations having the same consequence." (Roger N Shepard, "Toward a Universal Law of Generalization for Psychological Science", Science 237 (4820), 1987)

"Searching for patterns is a way of thinking that is essential for making generalizations, seeing relationships, and understanding the logic and order of mathematics. Functions evolve from the investigation of patterns and unify the various aspects of mathematics." (Marilyn Burns, "About Teaching Mathematics: A K–8 Resource", 1992)

"Generalization is the process of matching new, unknown input data with the problem knowledge in order to obtain the best possible solution, or one close to it. Generalization means reacting properly to new situations, for example, recognizing new images, or classifying new objects and situations. Generalization can also be described as a transition from a particular object description to a general concept description. This is a major characteristic of all intelligent systems." (Nikola K Kasabov, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering", 1996)

"Generalization is a core concept in machine learning; to be useful, machine-learning algorithms can’t just memorize the past, they must learn from the past. Generalization is the ability to respond properly to new situations based on experience from past situations." (Prashant Natarajan et al, "Demystifying Big Data and Machine Learning for Healthcare", 2017)

"In machine learning, a model is defined as a function, and we describe the learning function from the training data as inductive learning. Generalization refers to how well the concepts are learned by the model by applying them to data not seen before. The goal of a good machine-learning model is to reduce generalization errors and thus make good predictions on data that the model has never seen." (Umesh R Hodeghatta & Umesha Nayak, "Business Analytics Using R: A Practical Approach", 2017)
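
The point both quotes make - a model must be scored on data it never saw, not on what it memorized - can be sketched without any library. The toy 1-nearest-neighbour classifier and the data below are hypothetical.

```python
# Toy training and held-out test sets: (value, label) pairs (made-up data)
train = [(1.0, 'a'), (1.2, 'a'), (3.0, 'b'), (3.3, 'b')]
test = [(0.9, 'a'), (3.1, 'b'), (1.1, 'a')]

def predict(x):
    # 1-nearest-neighbour: return the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Generalization is measured on data the model never saw during training;
# the goal is to drive the generalization error down.
train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
generalization_error = 1 - test_acc

print(train_acc, test_acc, generalization_error)
```

A model that merely memorized the training pairs would score perfectly on `train_acc` yet could fail on `test_acc`; only the held-out score speaks to generalization.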

"But law is no explanation of anything; law is simply a generalization, a category of facts. Law is neither a cause, nor a reason, nor a power, nor a coercive force. It is nothing but a general formula, a statistical table." (Florence Nightingale)

"Facts are facts and it is from facts that we make our generalizations, from the little to the great, and it is wrong for a stranger to the facts he handles to generalize from them to other generalizations." (Charles Schuchert)

"Generalization is necessary to the advancement of knowledge; but particularity is indispensable to the creations of the imagination." (Thomas B Macaulay)

"Generalizations would be excellent things if we could be persuaded to part with them as easily as we formed them. They might then be used like the shifting hypotheses in certain operations of exact science, by help of which we may gradually approximate nearer and nearer to the truth." (Henry De la Beche)

"No one sees further into a generalization than his own knowledge of detail extends." (William James)

🔭Data Science: Analogy (Just the Quotes)

"In order to supply the defects of experience, we will have recourse to the probable conjectures of analogy, conclusions which we will bequeath to our posterity to be ascertained by new observations, which, if we augur rightly, will serve to establish our theory and to carry it gradually nearer to absolute certainty." (Johann H Lambert, "The System of the World", 1800)

"Simplicity and precision ought to be the characteristics of a scientific nomenclature: words should signify things, or the analogies of things, and not opinions." (Sir Humphry Davy, "Elements of Chemical Philosophy", 1812)

"The most important questions of life are, for the most part, really only problems of probability. Strictly speaking one may even say that nearly all our knowledge is problematical; and in the small number of things which we are able to know with certainty, even in the mathematical sciences themselves, induction and analogy, the principal means for discovering truth, are based on probabilities, so that the entire system of human knowledge is connected with this theory." (Pierre-Simon Laplace, "Theorie Analytique des Probabilités", 1812)

"Induction, analogy, hypotheses founded upon facts and rectified continually by new observations, a happy tact given by nature and strengthened by numerous comparisons of its indications with experience, such are the principal means for arriving at truth." (Pierre-Simon Laplace, "A Philosophical Essay on Probabilities", 1814)

"One may even say, strictly speaking, that almost all our knowledge is only probable; and in the small number of things that we are able to know with certainty, in the mathematical sciences themselves, the principal means of arriving at the truth - induction and analogy - are based on probabilities, so that the whole system of human knowledge is tied up with the theory set out in this essay." (Pierre-Simon Laplace, "Philosophical Essay on Probabilities", 1814)

"Such is the tendency of the human mind to speculation, that on the least idea of an analogy between a few phenomena, it leaps forward, as it were, to a cause or law, to the temporary neglect of all the rest; so that, in fact, almost all our principal inductions must be regarded as a series of ascents and descents, and of conclusions from a few cases, verified by trial on many." (Sir John Herschel, "A Preliminary Discourse on the Study of Natural Philosophy", 1830)

"Science is nothing but the finding of analogy, identity, in the most remote parts." (Ralph W Emerson, 1837)

"To reason from analogy is often dangerous, but to illustrate by a fanciful analogy is sometimes a means by which we light an idea, as it were, into the understanding of another." (Anna B Jameson, "Studies, Stories, and Memoirs", 1838)

"All perception of truth is the detection of an analogy [...]" (Henry D Thoreau, 1851)

"Reasoning from analogy is often most plausible and most deceptive." (Charles Simmons, "A Laconic Manual and Brief Remarker", 1852)

"Summing up, then, it would seem as if the mind of the great discoverer must combine contradictory attributes. He must be fertile in theories and hypotheses, and yet full of facts and precise results of experience. He must entertain the feeblest analogies, and the merest guesses at truth, and yet he must hold them as worthless till they are verified in experiment. When there are any grounds of probability he must hold tenaciously to an old opinion, and yet he must be prepared at any moment to relinquish it when a clearly contradictory fact is encountered." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1874)

"Most surprising and far-reaching analogies revealed themselves between apparently quite disparate natural processes. It seemed that nature had built the most various things on exactly the same pattern; or, in the dry words of the analyst, the same differential equations hold for the most various phenomena." (Ludwig Boltzmann, "On the methods of theoretical physics", 1892)

"If men of science owe anything to us, we may learn much from them that is essential. For they can show how to test proof, how to secure fulness and soundness in induction, how to restrain and to employ with safety hypothesis and analogy." (Lord John Acton, [Lecture] "The Study of History", 1895)

"The existence of analogies between central features of various theories implies the existence of a general theory which underlies the particular theories and unifies them with respect to those central features." (Eliakim H Moore, "Introduction to a Form of General Analysis", 1910)

"[…] analogies are not ‘aids’ to the establishment of theories; they are an utterly essential part of theories, without which theories would be completely valueless and unworthy of the name. It is often suggested that the analogy leads to the formulation of the theory, but that once the theory is formulated the analogy has served its purpose and may be removed or forgotten. Such a suggestion is absolutely false and perniciously misleading." (Norman R Campbell, "Physics: The Elements", 1920)

"Analogies are useful for analysis in unexplored fields. By means of analogies an unfamiliar system may be compared with one that is better known. The relations and actions are more easily visualized, the mathematics more readily applied, and the analytical solutions more readily obtained in the familiar system." (Harry F Olson, "Dynamical Analogies", 1943)

"This, however, is very speculative; the point of interest for our present enquiry is that physical reality is built up, apparently, from a few fundamental types of units whose properties determine many of the properties of the most complicated phenomena, and this seems to afford a sufficient explanation of the emergence of analogies between mechanisms and similarities of relation-structure among these combinations without the necessity of any theory of objective universals." (Kenneth Craik, "The Nature of Explanation", 1943)

"[…] one of the main functions of an analogy or model is to suggest extensions of the theory by considering extensions of the analogy, since more is known about the analogy than is known about the subject matter of the theory itself [...]" (Mary B Hesse, "Operational Definition and Analogy in Physical Theories", British Journal for the Philosophy of Science 2 (8), 1952)

"The methods of science may be described as the discovery of laws, the explanation of laws by theories, and the testing of theories by new observations. A good analogy is that of the jigsaw puzzle, for which the laws are the individual pieces, the theories local patterns suggested by a few pieces, and the tests the completion of these patterns with pieces previously unconsidered." (Edwin P Hubble, "The Nature of Science and Other Lectures", 1954)

"An analogy is a relationship between two entities, processes, or what you will, which allows inferences to be made about one of the things, usually that about which we know least, on the basis of what we know about the other. […] The art of using analogy is to balance up what we know of the likenesses against the unlikenesses between two things, and then on the basis of this balance make an inference as to what is called the neutral analogy, that about which we do not know." (Rom Harré, "The Philosophies of Science", 1972)

"Catastrophe Theory is - quite likely - the first coherent attempt (since Aristotelian logic) to give a theory on analogy. When narrow-minded scientists object to Catastrophe Theory that it gives no more than analogies, or metaphors, they do not realise that they are stating the proper aim of Catastrophe Theory, which is to classify all possible types of analogous situations." (René F Thom, "La Théorie des catastrophes: État présent et perspective", 1977)

"The scientific discovery appears first as the hypothesis of an analogy; and science tends to become independent of the hypothesis." (William K Clifford, "Lectures and Essays", 1879)

"Analogies, metaphors, and emblems are the threads by which the mind holds on to the world even when, absentmindedly, it has lost direct contact with it, and they guarantee the unity of human experience. Moreover, in the thinking process itself they serve as models to give us our bearings lest we stagger blindly among experiences that our bodily senses with their relative certainty of knowledge cannot guide us through." (Hannah Arendt, "The Life of the Mind", 1981)

"There are many things you can do with problems besides solving them. First you must define them, pose them. But then of course you can also refi ne them, depose them, or expose them or even dissolve them! A given problem may send you looking for analogies, and some of these may lead you astray, suggesting new and different problems, related or not to the original. Ends and means can get reversed. You had a goal, but the means you found didn’t lead to it, so you found a new goal they did lead to. It’s called play. Creative mathematicians play a lot; around any problem really interesting they develop a whole cluster of analogies, of playthings." (David Hawkins, "The Spirit of Play", Los Alamos Science, 1987)

"A scientific problem can be illuminated by the discovery of a profound analogy, and a mundane problem can be solved in a similar way." (Philip Johnson-Laird, "The Computer and the Mind", 1988)

"Mathematics is the study of analogies between analogies. All science is. Scientists want to show that things that don’t look alike are really the same. That is one of their innermost Freudian motivations. In fact, that is what we mean by understanding." (Gian-Carlo Rota, "Indiscrete Thoughts", 1997)

"What cognitive capabilities underlie our fundamental human achievements? Although a complete answer remains elusive, one basic component is a special kind of symbolic activity - the ability to pick out patterns, to identify recurrences of these patterns despite variation in the elements that compose them, to form concepts that abstract and reify these patterns, and to express these concepts in language. Analogy, in its most general sense, is this ability to think about relational patterns." (Keith Holyoak et al, "Introduction: The Place of Analogy in Cognition", 2001)

"By bringing together what we know and what we don't know through analogy, metaphorical thinking strikes the spark that ignites discovery." (James Geary, [TED talk] 2009)

"The human mind delights in finding pattern - so much so that we often mistake coincidence or forced analogy for profound meaning. No other habit of thought lies so deeply within the soul of a small creature trying to make sense of a complex world not constructed for it." (Stephen J Gould, "The Flamingo's Smile: Reflections in Natural History", 2010)

More quotes on "Analogy" at the-web-of-knowledge.blogspot.com

09 December 2018

🔭Data Science: Failure (Just the Quotes)

"Every detection of what is false directs us towards what is true: every trial exhausts some tempting form of error. Not only so; but scarcely any attempt is entirely a failure; scarcely any theory, the result of steady thought, is altogether false; no tempting form of error is without some latent charm derived from truth." (William Whewell, "Lectures on the History of Moral Philosophy in England", 1852)

"Scarcely any attempt is entirely a failure; scarcely any theory, the result of steady thought, is altogether false; no tempting form of Error is without some latent charm derived from Truth." (William Whewell, "Lectures on the History of Moral Philosophy in England", 1852)

"We learn wisdom from failure much more than from success. We often discover what will do, by finding out what will not do; and probably he who never made a mistake never made a discovery." (Samuel Smiles, "Facilities and Difficulties", 1859)

"[…] the statistical prediction of the future from the past cannot be generally valid, because whatever is future to any given past, is in turn past for some future. That is, whoever continually revises his judgment of the probability of a statistical generalization by its successively observed verifications and failures, cannot fail to make more successful predictions than if he should disregard the past in his anticipation of the future. This might be called the ‘Principle of statistical accumulation’." (Clarence I Lewis, "Mind and the World-Order: Outline of a Theory of Knowledge", 1929)

"Science condemns itself to failure when, yielding to the infatuation of the serious, it aspires to attain being, to contain it, and to possess it; but it finds its truth if it considers itself as a free engagement of thought in the given, aiming, at each discovery, not at fusion with the thing, but at the possibility of new discoveries; what the mind then projects is the concrete accomplishment of its freedom." (Simone de Beauvoir, "The Ethics of Ambiguity", 1947)

"Common sense […] may be thought of as a series of concepts and conceptual schemes which have proved highly satisfactory for the practical uses of mankind. Some of those concepts and conceptual schemes were carried over into science with only a little pruning and whittling and for a long time proved useful. As the recent revolutions in physics indicate, however, many errors can be made by failure to examine carefully just how common sense ideas should be defined in terms of what the experimenter plans to do." (James B Conant, "Science and Common Sense", 1951)

"Catastrophes are often stimulated by the failure to feel the emergence of a domain, and so what cannot be felt in the imagination is experienced as embodied sensation in the catastrophe." (William I Thompson, "Gaia, a Way of Knowing: Political Implications of the New Biology", 1987)

"What about confusing clutter? Information overload? Doesn't data have to be ‘boiled down’ and ‘simplified’? These common questions miss the point, for the quantity of detail is an issue completely separate from the difficulty of reading. Clutter and confusion are failures of design, not attributes of information." (Edward R Tufte, "Envisioning Information", 1990)

"When a system is predictable, it is already performing as consistently as possible. Looking for assignable causes is a waste of time and effort. Instead, you can meaningfully work on making improvements and modifications to the process. When a system is unpredictable, it will be futile to try and improve or modify the process. Instead you must seek to identify the assignable causes which affect the system. The failure to distinguish between these two different courses of action is a major source of confusion and wasted effort in business today." (Donald J Wheeler, "Understanding Variation: The Key to Managing Chaos" 2nd Ed., 2000)

"[…] in cybernetics, control is seen not as a function of one agent over something else, but as residing within circular causal networks, maintaining stabilities in a system. Circularities have no beginning, no end and no asymmetries. The control metaphor of communication, by contrast, punctuates this circularity unevenly. It privileges the conceptions and actions of a designated controller by distinguishing between messages sent in order to cause desired effects and feedback that informs the controller of successes or failures." (Klaus Krippendorff, "On Communicating: Otherness, Meaning, and Information", 2009)

"To get a true understanding of the work of mathematicians, and the need for proof, it is important for you to experiment with your own intuitions, to see where they lead, and then to experience the same failures and sense of accomplishment that mathematicians experienced when they obtained the correct results. Through this, it should become clear that, when doing any level of mathematics, the roads to correct solutions are rarely straight, can be quite different, and take patience and persistence to explore." (Alan Sultan & Alice F Artzt, "The Mathematics that every Secondary School Math Teacher Needs to Know", 2011)

"A very different - and very incorrect - argument is that successes must be balanced by failures (and failures by successes) so that things average out. Every coin flip that lands heads makes tails more likely. Every red at roulette makes black more likely. […] These beliefs are all incorrect. Good luck will certainly not continue indefinitely, but do not assume that good luck makes bad luck more likely, or vice versa." (Gary Smith, "Standard Deviations", 2014)

"We are seduced by patterns and we want explanations for these patterns. When we see a string of successes, we think that a hot hand has made success more likely. If we see a string of failures, we think a cold hand has made failure more likely. It is easy to dismiss such theories when they involve coin flips, but it is not so easy with humans. We surely have emotions and ailments that can cause our abilities to go up and down. The question is whether these fluctuations are important or trivial." (Gary Smith, "Standard Deviations", 2014)

"Although cascading failures may appear random and unpredictable, they follow reproducible laws that can be quantified and even predicted using the tools of network science. First, to avoid damaging cascades, we must understand the structure of the network on which the cascade propagates. Second, we must be able to model the dynamical processes taking place on these networks, like the flow of electricity. Finally, we need to uncover how the interplay between the network structure and dynamics affects the robustness of the whole system." (Albert-László Barabási, "Network Science", 2016)

More quotes on "Failure" at the-web-of-knowledge.blogspot.com

🔭Data Science: Distributions (Just the Quotes)

"If the number of experiments be very large, we may have precise information as to the value of the mean, but if our sample be small, we have two sources of uncertainty: (1) owing to the 'error of random sampling' the mean of our series of experiments deviates more or less widely from the mean of the population, and (2) the sample is not sufficiently large to determine what is the law of distribution of individuals." (William S Gosset, "The Probable Error of a Mean", Biometrika, 1908)

"We know not to what are due the accidental errors, and precisely because we do not know, we are aware they obey the law of Gauss. Such is the paradox." (Henri Poincaré, "The Foundations of Science", 1913)

"The problems which arise in the reduction of data may thus conveniently be divided into three types: (i) Problems of Specification, which arise in the choice of the mathematical form of the population. (ii) When a specification has been obtained, problems of Estimation arise. These involve the choice among the methods of calculating, from our sample, statistics fit to estimate the unknown parameters of the population. (iii) Problems of Distribution include the mathematical deduction of the exact nature of the distributions in random samples of our estimates of the parameters, and of other statistics designed to test the validity of our specification (tests of Goodness of Fit)." (Sir Ronald A Fisher, "Statistical Methods for Research Workers", 1925)

"An inference, if it is to have scientific value, must constitute a prediction concerning future data. If the inference is to be made purely with the help of the distribution theory of statistics, the experiments that constitute evidence for the inference must arise from a state of statistical control; until that state is reached, there is no universe, normal or otherwise, and the statistician’s calculations by themselves are an illusion if not a delusion. The fact is that when distribution theory is not applicable for lack of control, any inference, statistical or otherwise, is little better than a conjecture. The state of statistical control is therefore the goal of all experimentation." (William E Deming, "Statistical Method from the Viewpoint of Quality Control", 1939)

"Normality is a myth; there never was, and never will be, a normal distribution. This is an overstatement from the practical point of view, but it represents a safer initial mental attitude than any in fashion during the past two decades." (Roy C Geary, "Testing for Normality", Biometrika Vol. 34, 1947)

"A good estimator will be unbiased and will converge more and more closely (in the long run) on the true value as the sample size increases. Such estimators are known as consistent. But consistency is not all we can ask of an estimator. In estimating the central tendency of a distribution, we are not confined to using the arithmetic mean; we might just as well use the median. Given a choice of possible estimators, all consistent in the sense just defined, we can see whether there is anything which recommends the choice of one rather than another. The thing which at once suggests itself is the sampling variance of the different estimators, since an estimator with a small sampling variance will be less likely to differ from the true value by a large amount than an estimator whose sampling variance is large." (Michael J Moroney, "Facts from Figures", 1951)
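Moroney's point about choosing among consistent estimators by their sampling variance can be checked numerically; a short sketch with hypothetical normal samples compares the mean and the median as estimators of the center:

```python
# For normal data, both the sample mean and the sample median are
# consistent estimators of the center, but the mean has the smaller
# sampling variance (asymptotically by a factor of about pi/2).
import random
import statistics

random.seed(5)
means, medians = [], []
for _ in range(2000):
    sample = [random.gauss(0, 1) for _ in range(25)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

print(statistics.variance(means) < statistics.variance(medians))  # True
```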

"Some distributions [...] are symmetrical about their central value. Other distributions have marked asymmetry and are said to be skew. Skew distributions are divided into two types. If the 'tail' of the distribution reaches out into the larger values of the variate, the distribution is said to show positive skewness; if the tail extends towards the smaller values of the variate, the distribution is called negatively skew." (Michael J Moroney, "Facts from Figures", 1951)
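The positive and negative skewness Moroney defines can be illustrated with a short sketch (hypothetical exponential samples, using the moment-based skewness coefficient):

```python
import random

random.seed(0)

def skewness(xs):
    # third standardized moment: positive when the tail reaches out
    # toward the larger values of the variate
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 3 for x in xs) / n

right_tail = [random.expovariate(1.0) for _ in range(10000)]  # tail to large values
left_tail = [-x for x in right_tail]                          # mirror image

print(skewness(right_tail) > 0)  # True: positively skew
print(skewness(left_tail) < 0)   # True: negatively skew
```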

"[A] sequence is random if it has every property that is shared by all infinite sequences of independent samples of random variables from the uniform distribution." (Joel N Franklin, 1962)

"Mathematical statistics provides an exceptionally clear example of the relationship between mathematics and the external world. The external world provides the experimentally measured distribution curve; mathematics provides the equation (the mathematical model) that corresponds to the empirical curve. The statistician may be guided by a thought experiment in finding the corresponding equation." (Marshall J Walker, "The Nature of Scientific Thought", 1963)

"Pencil and paper for construction of distributions, scatter diagrams, and run-charts to compare small groups and to detect trends are more efficient methods of estimation than statistical inference that depends on variances and standard errors, as the simple techniques preserve the information in the original data." (William E Deming, "On Probability as Basis for Action", American Statistician Vol. 29 (4), 1975)

"When the statistician looks at the outside world, he cannot, for example, rely on finding errors that are independently and identically distributed in approximately normal distributions. In particular, most economic and business data are collected serially and can be expected, therefore, to be heavily serially dependent. So is much of the data collected from the automatic instruments which are becoming so common in laboratories these days. Analysis of such data, using procedures such as standard regression analysis which assume independence, can lead to gross error. Furthermore, the possibility of contamination of the error distribution by outliers is always present and has recently received much attention. More generally, real data sets, especially if they are long, usually show inhomogeneity in the mean, the variance, or both, and it is not always possible to randomize." (George E P Box, "Some Problems of Statistics and Everyday Life", Journal of the American Statistical Association, Vol. 74 (365), 1979)

"At the heart of probabilistic statistical analysis is the assumption that a set of data arises as a sample from a distribution in some class of probability distributions. The reasons for making distributional assumptions about data are several. First, if we can describe a set of data as a sample from a certain theoretical distribution, say a normal distribution (also called a Gaussian distribution), then we can achieve a valuable compactness of description for the data. For example, in the normal case, the data can be succinctly described by giving the mean and standard deviation and stating that the empirical (sample) distribution of the data is well approximated by the normal distribution. A second reason for distributional assumptions is that they can lead to useful statistical procedures. For example, the assumption that data are generated by normal probability distributions leads to the analysis of variance and least squares. Similarly, much of the theory and technology of reliability assumes samples from the exponential, Weibull, or gamma distribution. A third reason is that the assumptions allow us to characterize the sampling distribution of statistics computed during the analysis and thereby make inferences and probabilistic statements about unknown aspects of the underlying distribution. For example, assuming the data are a sample from a normal distribution allows us to use the t-distribution to form confidence intervals for the mean of the theoretical distribution. A fourth reason for distributional assumptions is that understanding the distribution of a set of data can sometimes shed light on the physical mechanisms involved in generating the data." (John M Chambers et al, "Graphical Methods for Data Analysis", 1983)

"Equal variability is not always achieved in plots. For instance, if the theoretical distribution for a probability plot has a density that drops off gradually to zero in the tails (as the normal density does), then the variability of the data in the tails of the probability plot is greater than in the center. Another example is provided by the histogram. Since the height of any one bar has a binomial distribution, the standard deviation of the height is approximately proportional to the square root of the expected height; hence, the variability of the longer bars is greater." (John M Chambers et al, "Graphical Methods for Data Analysis", 1983)

"Symmetry is also important because it can simplify our thinking about the distribution of a set of data. If we can establish that the data are (approximately) symmetric, then we no longer need to describe the shapes of both the right and left halves. (We might even combine the information from the two sides and have effectively twice as much data for viewing the distributional shape.) Finally, symmetry is important because many statistical procedures are designed for, and work best on, symmetric data." (John M Chambers et al, "Graphical Methods for Data Analysis", 1983)

"We will use the convenient expression 'chosen at random' to mean that the probabilities of the events in the sample space are all the same unless some modifying words are near to the words 'at random'. Usually we will compute the probability of the outcome based on the uniform probability model since that is very common in modeling simple situations. However, a uniform distribution does not imply that it comes from a random source; […]" (Richard W Hamming, "The Art of Probability for Scientists and Engineers", 1991)

"Data that are skewed toward large values occur commonly. Any set of positive measurements is a candidate. Nature just works like that. In fact, if data consisting of positive numbers range over several powers of ten, it is almost a guarantee that they will be skewed. Skewness creates many problems. There are visualization problems. A large fraction of the data are squashed into small regions of graphs, and visual assessment of the data degrades. There are characterization problems. Skewed distributions tend to be more complicated than symmetric ones; for example, there is no unique notion of location and the median and mean measure different aspects of the distribution. There are problems in carrying out probabilistic methods. The distribution of skewed data is not well approximated by the normal, so the many probabilistic methods based on an assumption of a normal distribution cannot be applied." (William S Cleveland, "Visualizing Data", 1993)
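A standard remedy for the skewness Cleveland describes (not named in the quote itself) is a log transform; a minimal sketch with hypothetical positive data spanning three powers of ten:

```python
import math
import random
import statistics

random.seed(2)
raw = [10 ** random.uniform(0, 3) for _ in range(5000)]  # spans roughly 1..1000

# On the raw scale the mean and median measure different aspects of
# the distribution and disagree badly...
print(statistics.fmean(raw), statistics.median(raw))

# ...but on the log scale the data are roughly symmetric, so the two
# measures of location nearly coincide.
logged = [math.log10(x) for x in raw]
print(statistics.fmean(logged), statistics.median(logged))
```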

"Fitting data means finding mathematical descriptions of structure in the data. An additive shift is a structural property of univariate data in which distributions differ only in location and not in spread or shape. […] The process of identifying a structure in data and then fitting the structure to produce residuals that have the same distribution lies at the heart of statistical analysis. Such homogeneous residuals can be pooled, which increases the power of the description of the variation in the data." (William S Cleveland, "Visualizing Data", 1993)

"Many good things happen when data distributions are well approximated by the normal. First, the question of whether the shifts among the distributions are additive becomes the question of whether the distributions have the same standard deviation; if so, the shifts are additive. […] A second good happening is that methods of fitting and methods of probabilistic inference, to be taken up shortly, are typically simple and on well understood ground. […] A third good thing is that the description of the data distribution is more parsimonious." (William S Cleveland, "Visualizing Data", 1993)

"Probabilistic inference is the classical paradigm for data analysis in science and technology. It rests on a foundation of randomness; variation in data is ascribed to a random process in which nature generates data according to a probability distribution. This leads to a codification of uncertainty by confidence intervals and hypothesis tests." (William S Cleveland, "Visualizing Data", 1993)

"When distributions are compared, the goal is to understand how the distributions shift in going from one data set to the next. […] The most effective way to investigate the shifts of distributions is to compare corresponding quantiles." (William S Cleveland, "Visualizing Data", 1993)
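Cleveland's comparison of corresponding quantiles can be sketched in a few lines of Python (hypothetical data; an additive shift shows up as a constant quantile difference):

```python
import random
import statistics

random.seed(3)
a = [random.gauss(0, 1) for _ in range(2000)]
b = [x + 5 for x in a]  # same shape and spread, shifted location

deciles_a = statistics.quantiles(a, n=10)
deciles_b = statistics.quantiles(b, n=10)
shifts = [qb - qa for qa, qb in zip(deciles_a, deciles_b)]
print(shifts)  # each corresponding quantile differs by ~5: an additive shift
```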

"When the distributions of two or more groups of univariate data are skewed, it is common to have the spread increase monotonically with location. This behavior is monotone spread. Strictly speaking, monotone spread includes the case where the spread decreases monotonically with location, but such a decrease is much less common for raw data. Monotone spread, as with skewness, adds to the difficulty of data analysis. For example, it means that we cannot fit just location estimates to produce homogeneous residuals; we must fit spread estimates as well. Furthermore, the distributions cannot be compared by a number of standard methods of probabilistic inference that are based on an assumption of equal spreads; the standard t-test is one example. Fortunately, remedies for skewness can cure monotone spread as well." (William S Cleveland, "Visualizing Data", 1993)

"A normal distribution is most unlikely, although not impossible, when the observations are dependent upon one another - that is, when the probability of one event is determined by a preceding event. The observations will fail to distribute themselves symmetrically around the mean." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

"Linear regression assumes that in the population a normal distribution of error values around the predicted Y is associated with each X value, and that the dispersion of the error values for each X value is the same. The assumptions imply normal and similarly dispersed error distributions." (Fred C Pampel, "Linear Regression: A primer", 2000)

"The principle of maximum entropy is employed for estimating unknown probabilities (which cannot be derived deductively) on the basis of the available information. According to this principle, the estimated probability distribution should be such that its entropy reaches maximum within the constraints of the situation, i.e., constraints that represent the available information. This principle thus guarantees that no more information is used in estimating the probabilities than available." (George J Klir & Doug Elias, "Architecture of Systems Problem Solving" 2nd Ed, 2003) 

"The principle of minimum entropy is employed in the formulation of resolution forms and related problems. According to this principle, the entropy of the estimated probability distribution, conditioned by a particular classification of the given events (e.g., states of the variable involved), is minimum subject to the constraints of the situation. This principle thus guarantees that all available information is used, as much as possible within the given constraints (e.g., required number of states), in the estimation of the unknown probabilities." (George J Klir & Doug Elias, "Architecture of Systems Problem Solving" 2nd Ed, 2003)
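
The contrast Klir and Elias draw can be illustrated by computing Shannon entropy for a few candidate distributions; the distributions below are made up for the example, and the only constraint assumed is that probabilities sum to 1:

```python
import math

def entropy(p):
    # Shannon entropy in bits; zero-probability outcomes contribute nothing
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Candidate distributions over four states, all consistent with the only
# available information: the probabilities sum to 1.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.70, 0.10, 0.10, 0.10]
peaked  = [0.97, 0.01, 0.01, 0.01]

# The maximum-entropy principle selects the uniform distribution here,
# since it assumes nothing beyond the stated constraint.
for name, p in [("uniform", uniform), ("skewed", skewed), ("peaked", peaked)]:
    print(f"{name}: H = {entropy(p):.3f} bits")
```

Picking the distribution with the highest entropy guards against smuggling in information the constraints do not contain; minimizing entropy, conversely, squeezes the most out of the information that is available.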

"In the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to explain is to anticipate. To anticipate is to explain." (Eliezer S. Yudkowsky, "A Technical Explanation of Technical Explanation", 2005)

"The central limit theorem says that, under conditions almost always satisfied in the real world of experimentation, the distribution of such a linear function of errors will tend to normality as the number of its components becomes large. The tendency to normality occurs almost regardless of the individual distributions of the component errors. An important proviso is that several sources of error must make important contributions to the overall error and that no particular source of error dominate the rest." (George E P Box et al, "Statistics for Experimenters: Design, discovery, and innovation" 2nd Ed., 2005)

"Two things explain the importance of the normal distribution: (1) The central limit effect that produces a tendency for real error distributions to be 'normal like'. (2) The robustness to nonnormality of some common statistical procedures, where 'robustness' means insensitivity to deviations from theoretical normality." (George E P Box et al, "Statistics for Experimenters: Design, discovery, and innovation" 2nd Ed., 2005)

"For some scientific data the true value cannot be given by a constant or some straightforward mathematical function but by a probability distribution or an expectation value. Such data are called probabilistic. Even so, their true value does not change with time or place, making them distinctly different from most statistical data of everyday life." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"In error analysis the so-called 'chi-squared' is a measure of the agreement between the uncorrelated internal and the external uncertainties of a measured functional relation. The simplest such relation would be time independence. Theory of the chi-squared requires that the uncertainties be normally distributed. Nevertheless, it was found that the test can be applied to most probability distributions encountered in practice." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"To fulfill the requirements of the theory underlying uncertainties, variables with random uncertainties must be independent of each other and identically distributed. In the limiting case of an infinite number of such variables, these are called normally distributed. However, one usually speaks of normally distributed variables even if their number is finite." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"Traditional statistics is strong in devising ways of describing data and inferring distributional parameters from sample. Causal inference requires two additional ingredients: a science-friendly language for articulating causal knowledge, and a mathematical machinery for processing that knowledge, combining it with data and drawing new causal conclusions about a phenomenon." (Judea Pearl, "Causal inference in statistics: An overview", Statistics Surveys 3, 2009)

"The elements of this cloud of uncertainty (the set of all possible errors) can be described in terms of probability. The center of the cloud is the number zero, and elements of the cloud that are close to zero are more probable than elements that are far away from that center. We can be more precise in this definition by defining the cloud of uncertainty in terms of a mathematical function, called the probability distribution." (David S Salsburg, "Errors, Blunders, and Lies: How to Tell the Difference", 2017)

"It is not enough to give a single summary for a distribution - we need to have an idea of the spread, sometimes known as the variability. [...] The range is a natural choice, but is clearly very sensitive to extreme values [...] In contrast the inter-quartile range (IQR) is unaffected by extremes. This is the distance between the 25th and 75th percentiles of the data and so contains the ‘central half’ of the numbers [...] Finally the standard deviation is a widely used measure of spread. It is the most technically complex measure, but is only really appropriate for well-behaved symmetric data since it is also unduly influenced by outlying values." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)
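
Spiegelhalter's comparison of the three spread measures can be checked directly; the data below are invented, with a single extreme value appended to show which measures it disturbs:

```python
import statistics

# Hypothetical well-behaved data, then the same data with one extreme value
data = [3, 4, 5, 5, 6, 6, 7, 7, 8, 9]
with_outlier = data + [40]

def spread_summary(xs):
    # Range, inter-quartile range, and standard deviation of a sample
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return {"range": max(xs) - min(xs),
            "IQR": q3 - q1,
            "sd": statistics.stdev(xs)}

print(spread_summary(data))
print(spread_summary(with_outlier))  # range and sd jump; the IQR barely moves
```
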

"[...] the Central Limit Theorem [...] says that the distribution of sample means tends towards the form of a normal distribution with increasing sample size, almost regardless of the shape of the original data distribution." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)
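
The tendency Spiegelhalter describes shows up in a small simulation: draw sample means from a markedly skewed distribution (an exponential, chosen here just for illustration) and watch their spread shrink and their distribution symmetrize as the sample size grows:

```python
import random
import statistics

random.seed(42)

def sample_means(n, trials=2000):
    # Means of `trials` samples of size n from a skewed (exponential) distribution
    return [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

for n in (1, 5, 50):
    means = sample_means(n)
    print(f"n={n:3d}  mean of means ~ {statistics.fmean(means):.3f}  "
          f"sd of means ~ {statistics.stdev(means):.3f}")
```

The standard deviation of the sample means falls roughly as 1/sqrt(n), in line with the theorem, even though each individual draw is far from normal.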

"There is no ‘correct’ way to display sets of numbers: each of the plots we have used has some advantages: strip-charts show individual points, box-and-whisker plots are convenient for rapid visual summaries, and histograms give a good feel for the underlying shape of the data distribution." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)

More quotes on "Distributions" at the-web-of-knowledge.blogspot.com

🔭Data Science: Inference (Just the Quotes)

"Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it." (Eudoxus, ca. 4th century BC)

"Every stage of science has its train of practical applications and systematic inferences, arising both from the demands of convenience and curiosity, and from the pleasure which, as we have already said, ingenious and active-minded men feel in exercising the process of deduction." (William Whewell, "The Philosophy of the Inductive Sciences Founded Upon Their History", 1840)

"Truths are known to us in two ways: some are known directly, and of themselves; some through the medium of other truths. The former are the subject of Intuition, or Consciousness; the latter, of Inference. The truths known by Intuition are the original premises, from which all others are inferred." (John S Mill, "A System of Logic, Ratiocinative and Inductive", 1858)

"It is experience which has given us our first real knowledge of Nature and her laws. It is experience, in the shape of observation and experiment, which has given us the raw material out of which hypothesis and inference have slowly elaborated that richer conception of the material world which constitutes perhaps the chief, and certainly the most characteristic, glory of the modern mind." (Arthur J Balfour, "The Foundations of Belief", 1912)

"The study of inductive inference belongs to the theory of probability, since observational facts can make a theory only probable but will never make it absolutely certain." (Hans Reichenbach, "The Rise of Scientific Philosophy", 1951)

"Statistics is the name for that science and art which deals with uncertain inferences - which uses numbers to find out something about nature and experience." (Warren Weaver, 1952)

"The heart of all major discoveries in the physical sciences is the discovery of novel methods of representation and so of fresh techniques by which inferences can be drawn - and drawn in ways which fit the phenomena under investigation." (Stephen Toulmin, "The Philosophy of Science", 1957)

"Assumptions that we make, such as those concerning the form of the population sampled, are always untrue." (David R Cox, "Some problems connected with statistical inference", Annals of Mathematical Statistics 29, 1958)

"Exact truth of a null hypothesis is very unlikely except in a genuine uniformity trial." (David R Cox, "Some problems connected with statistical inference", Annals of Mathematical Statistics 29, 1958)

"[...] the test of significance has been carrying too much of the burden of scientific inference. It may well be the case that wise and ingenious investigators can find their way to reasonable conclusions from data because and in spite of their procedures. Too often, however, even wise and ingenious investigators [...] tend to credit the test of significance with properties it does not have." (David Bakan, "The test of significance in psychological research", Psychological Bulletin 66, 1966) 

"[...] we need to get on with the business of generating [...] hypotheses and proceed to do investigations and make inferences which bear on them, instead of [...] testing the statistical null hypothesis in any number of contexts in which we have every reason to suppose that it is false in the first place." (David Bakan, "The test of significance in psychological research", Psychological Bulletin 66, 1966)

"An analogy is a relationship between two entities, processes, or what you will, which allows inferences to be made about one of the things, usually that about which we know least, on the basis of what we know about the other. […] The art of using analogy is to balance up what we know of the likenesses against the unlikenesses between two things, and then on the basis of this balance make an inference as to what is called the neutral analogy, that about which we do not know." (Rom Harré," The Philosophies of Science" , 1972)

"Almost all efforts at data analysis seek, at some point, to generalize the results and extend the reach of the conclusions beyond a particular set of data. The inferential leap may be from past experiences to future ones, from a sample of a population to the whole population, or from a narrow range of a variable to a wider range. The real difficulty is in deciding when the extrapolation beyond the range of the variables is warranted and when it is merely naive. As usual, it is largely a matter of substantive judgment - or, as it is sometimes more delicately put, a matter of 'a priori nonstatistical considerations'." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"Pencil and paper for construction of distributions, scatter diagrams, and run-charts to compare small groups and to detect trends are more efficient methods of estimation than statistical inference that depends on variances and standard errors, as the simple techniques preserve the information in the original data." (W Edwards Deming, "On Probability as Basis for Action", American Statistician, Volume 29, Number 4, November 1975)

"The advantage of semantic networks over standard logic is that some selected set of the possible inferences can be made in a specialized and efficient way. If these correspond to the inferences that people make naturally, then the system will be able to do a more natural sort of reasoning than can be easily achieved using formal logical deduction." (Avron Barr, "Natural Language Understanding", AI Magazine Vol. 1 (1), 1980)

"Another reason for the applied statistician to care about Bayesian inference is that consumers of statistical answers, at least interval estimates, commonly interpret them as probability statements about the possible values of parameters. Consequently, the answers statisticians provide to consumers should be capable of being interpreted as approximate Bayesian statements." (Donald B Rubin, "Bayesianly justifiable and relevant frequency calculations for the applied statistician", Annals of Statistics 12(4), 1984)

"The grotesque emphasis on significance tests in statistics courses of all kinds [...] is taught to people, who if they come away with no other notion, will remember that statistics is about tests for significant differences. [...] The apparatus on which their statistics course has been constructed is often worse than irrelevant, it is misleading about what is important in examining data and making inferences." (John A Nelder, "Discussion of Dr Chatfield’s paper", Journal of the Royal Statistical Society A 148, 1985)

"Models are often used to decide issues in situations marked by uncertainty. However statistical inferences from data depend on assumptions about the process which generated these data. If the assumptions do not hold, the inferences may not be reliable either. This limitation is often ignored by applied workers who fail to identify crucial assumptions or subject them to any kind of empirical testing. In such circumstances, using statistical procedures may only compound the uncertainty." (David A Freedman & William C Navidi, "Regression Models for Adjusting the 1980 Census", Statistical Science Vol. 1 (1), 1986)

"It is difficult to distinguish deduction from what in other circumstances is called problem-solving. And concept learning, inference, and reasoning by analogy are all instances of inductive reasoning. (Detectives typically induce, rather than deduce.) None of these things can be done separately from each other, or from anything else. They are pseudo-categories." (Frank Smith, "To Think: In Language, Learning and Education", 1990)

"No one has ever shown that he or she had a free lunch. Here, of course, 'free lunch' means 'usefulness of a model that is locally easy to make inferences from'." (John Tukey, "Issues relevant to an honest account of data-based inference, partially in the light of Laurie Davies’ paper", 1993)

"Probabilistic inference is the classical paradigm for data analysis in science and technology. It rests on a foundation of randomness; variation in data is ascribed to a random process in which nature generates data according to a probability distribution. This leads to a codification of uncertainty by confidence intervals and hypothesis tests." (William S Cleveland, "Visualizing Data", 1993)

"When the distributions of two or more groups of univariate data are skewed, it is common to have the spread increase monotonically with location. This behavior is monotone spread. Strictly speaking, monotone spread includes the case where the spread decreases monotonically with location, but such a decrease is much less common for raw data. Monotone spread, as with skewness, adds to the difficulty of data analysis. For example, it means that we cannot fit just location estimates to produce homogeneous residuals; we must fit spread estimates as well. Furthermore, the distributions cannot be compared by a number of standard methods of probabilistic inference that are based on an assumption of equal spreads; the standard t-test is one example. Fortunately, remedies for skewness can cure monotone spread as well." (William S Cleveland, "Visualizing Data", 1993)

"In the design of experiments, one has to use some informal prior knowledge. How does one construct blocks in a block design problem for instance? It is stupid to think that use is not made of a prior. But knowing that this prior is utterly casual, it seems ludicrous to go through a lot of integration, etc., to obtain ‘exact’ posterior probabilities resulting from this prior. So, I believe the situation with respect to Bayesian inference and with respect to inference, in general, has not made progress. Well, Bayesian statistics has led to a great deal of theoretical research. But I don’t see any real utilizations in applications, you know. Now no one, as far as I know, has examined the question of whether the inferences that are obtained are, in fact, realized in the predictions that they are used to make." (Oscar Kempthorne, "A conversation with Oscar Kempthorne", Statistical Science vol. 10, 1995)

"The science of statistics may be described as exploring, analyzing and summarizing data; designing or choosing appropriate ways of collecting data and extracting information from them; and communicating that information. Statistics also involves constructing and testing models for describing chance phenomena. These models can be used as a basis for making inferences and drawing conclusions and, finally, perhaps for making decisions." (Fergus Daly et al, "Elements of Statistics", 1995)

"Theories rarely arise as patient inferences forced by accumulated facts. Theories are mental constructs potentiated by complex external prods (including, in idealized cases, a commanding push from empirical reality)." (Stephen J Gould, "Leonardo's Mountain of Clams and the Diet of Worms" , 1998)

"Let us regard a proof of an assertion as a purely mechanical procedure using precise rules of inference starting with a few unassailable axioms. This means that an algorithm can be devised for testing the validity of an alleged proof simply by checking the successive steps of the argument; the rules of inference constitute an algorithm for generating all the statements that can be deduced in a finite number of steps from the axioms." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"[…] philosophical theories are structured by conceptual metaphors that constrain which inferences can be drawn within that philosophical theory. The (typically unconscious) conceptual metaphors that are constitutive of a philosophical theory have the causal effect of constraining how you can reason within that philosophical framework." (George Lakoff, "Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought", 1999)

"Even if our cognitive maps of causal structure were perfect, learning, especially double-loop learning, would still be difficult. To use a mental model to design a new strategy or organization we must make inferences about the consequences of decision rules that have never been tried and for which we have no data. To do so requires intuitive solution of high-order nonlinear differential equations, a task far exceeding human cognitive capabilities in all but the simplest systems."  (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)

"Bayesian inference is a controversial approach because it inherently embraces a subjective notion of probability. In general, Bayesian methods provide no guarantees on long run performance." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"The Bayesian approach is based on the following postulates: (B1) Probability describes degree of belief, not limiting frequency. As such, we can make probability statements about lots of things, not just data which are subject to random variation. […] (B2) We can make probability statements about parameters, even though they are fixed constants. (B3) We make inferences about a parameter θ by producing a probability distribution for θ. Inferences, such as point estimates and interval estimates, may then be extracted from this distribution." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"Statistical inference, or 'learning' as it is called in computer science, is the process of using data to infer the distribution that generated the data." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"A mental model is conceived […] as a knowledge structure possessing slots that can be filled not only with empirically gained information but also with ‘default assumptions’ resulting from prior experience. These default assumptions can be substituted by updated information so that inferences based on the model can be corrected without abandoning the model as a whole. Information is assimilated to the slots of a mental model in the form of ‘frames’ which are understood here as ‘chunks’ of knowledge with a well-defined meaning anchored in a given body of shared knowledge." (Jürgen Renn, "Before the Riemann Tensor: The Emergence of Einstein’s Double Strategy", "The Universe of General Relativity" Ed. by A.J. Kox & Jean Eisenstaedt, 2005)

"Statistics is the branch of mathematics that uses observations and measurements called data to analyze, summarize, make inferences, and draw conclusions based on the data gathered." (Allan G Bluman, "Probability Demystified", 2005)

"In specific cases, we think by applying mental rules, which are similar to rules in computer programs. In most of the cases, however, we reason by constructing, inspecting, and manipulating mental models. These models and the processes that manipulate them are the basis of our competence to reason. In general, it is believed that humans have the competence to perform such inferences error-free. Errors do occur, however, because reasoning performance is limited by capacities of the cognitive system, misunderstanding of the premises, ambiguity of problems, and motivational factors. Moreover, background knowledge can significantly influence our reasoning performance. This influence can either be facilitation or an impedance of the reasoning process." (Carsten Held et al, "Mental Models and the Mind", 2006)

"[…] statistics is the key discipline for predicting the future or for making inferences about the unknown, or for producing convenient summaries of data." (David J Hand, "Statistics: A Very Short Introduction", 2008)

"The only thing we know for sure about a missing data point is that it is not there, and there is nothing that the magic of statistics can do to change that. The best that can be managed is to estimate the extent to which missing data have influenced the inferences we wish to draw." (Howard Wainer, "14 Conversations About Three Things", Journal of Educational and Behavioral Statistics Vol. 35(1), 2010)

"When statistical inferences, such as p-values, follow extensive looks at the data, they no longer have their usual interpretation. Ignoring this reality is dishonest: it is like painting a bull’s eye around the landing spot of your arrow. This is known in some circles as p-hacking, and much has been written about its perils and pitfalls." (Robert E Kass et al, "Ten Simple Rules for Effective Statistical Practice", PLoS Comput Biol 12(6), 2016)

"Inference is to bring about a new thought, which in logic amounts to drawing a conclusion, and more generally involves using what we already know, and what we see or observe, to update prior beliefs. […] Inference is also a leap of sorts, deemed reasonable […] Inference is a basic cognitive act for intelligent minds. If a cognitive agent (a person, an AI system) is not intelligent, it will infer badly. But any system that infers at all must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence. If an AI system is not inferring at all, it doesn’t really deserve to be called AI." (Erik J Larson, "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do", 2021)

"In statistical inference and machine learning, we often talk about estimates and estimators. Estimates are basically our best guesses regarding some quantities of interest given (finite) data. Estimators are computational devices or procedures that allow us to map between a given (finite) data sample and an estimate of interest." (Aleksander Molak, "Causal Inference and Discovery in Python", 2023)

"The basic goal of causal inference is to estimate the causal effect of one set of variables on another. In most cases, to do it accurately, we need to know which variables we should control for. [...] to accurately control for confounders, we need to go beyond the realm of pure statistics and use the information about the data-generating process, which can be encoded as a (causal) graph. In this sense, the ability to translate between graphical and statistical properties is central to causal inference." (Aleksander Molak, "Causal Inference and Discovery in Python", 2023)

"Statistics is the science, the art, the philosophy, and the technique of making inferences from the particular to the general." (John W Tukey)

"The old rule of trusting the Central Limit Theorem if the sample size is larger than 30 is just that–old. Bootstrap and permutation testing let us more easily do inferences for a wider variety of statistics." (Tim Hesterberg)
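
The bootstrap inference Hesterberg mentions fits in a few lines: resample the data with replacement, recompute the statistic each time, and read an interval off the percentiles of the resampled statistics. The sample below is invented for illustration:

```python
import random
import statistics

random.seed(7)

# Hypothetical observed sample
sample = [12, 15, 9, 14, 20, 11, 16, 13, 18, 10]

# Resample with replacement and recompute the mean each time
boot_means = sorted(
    statistics.fmean(random.choices(sample, k=len(sample)))
    for _ in range(5000)
)

# A 95% percentile bootstrap interval for the mean
lo, hi = boot_means[int(0.025 * 5000)], boot_means[int(0.975 * 5000)]
print(f"sample mean = {statistics.fmean(sample):.2f}, "
      f"95% bootstrap interval ~ ({lo:.2f}, {hi:.2f})")
```

Replacing `statistics.fmean` with a median, trimmed mean, or correlation is what makes the method apply to "a wider variety of statistics" than the classical normal-theory formulas cover.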

More quotes on "Inference" at the-web-of-knowledge.blogspot.com

08 December 2018

🔭Data Science: Creativity (Just the Quotes)

"[…] science conceived as resting on mere sense-perception, with no other source of observation, is bankrupt, so far as concerns its claim to self-sufficiency. Science can find no individual enjoyment in nature: Science can find no aim in nature: Science can find no creativity in nature; it finds mere rules of succession. These negations are true of Natural Science. They are inherent in its methodology." (Alfred N Whitehead, "Modes of Thought", 1938)

"The design process involves a series of operations. In map design, it is convenient to break this sequence into three stages. In the first stage, you draw heavily on imagination and creativity. You think of various graphic possibilities, consider alternative ways." (Arthur H Robinson, "Elements of Cartography", 1953)

"At each level of complexity, entirely new properties appear. [And] at each stage, entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one." (Herb Anderson, 1972)

"Facts do not ‘speak for themselves’; they are read in the light of theory. Creative thought, in science as much as in the arts, is the motor of changing opinion. Science is a quintessentially human activity, not a mechanized, robot-like accumulation of objective information, leading by laws of logic to inescapable interpretation." (Stephen J Gould, "Ever Since Darwin", 1977)

"Science is not a heartless pursuit of objective information. It is a creative human activity, its geniuses acting more as artists than information processors. Changes in theory are not simply the derivative results of the new discoveries but the work of creative imagination influenced by contemporary social and political forces." (Stephen J Gould, "Ever Since Darwin: Reflections in Natural History", 1977)

"Science, since people must do it, is a socially embedded activity. It progresses by hunch, vision, and intuition. Much of its change through time does not record a closer approach to absolute truth, but the alteration of cultural contexts that influence it so strongly. Facts are not pure and unsullied bits of information; culture also influences what we see and how we see it. Theories, moreover, are not inexorable inductions from facts. The most creative theories are often imaginative visions imposed upon facts; the source of imagination is also strongly cultural." (Stephen J Gould, "The Mismeasure of Man", 1980) 

"Some methods, such as those governing the design of experiments or the statistical treatment of data, can be written down and studied. But many methods are learned only through personal experience and interactions with other scientists. Some are even harder to describe or teach. Many of the intangible influences on scientific discovery - curiosity, intuition, creativity - largely defy rational analysis, yet they are often the tools that scientists bring to their work." (Committee on the Conduct of Science, "On Being a Scientist", 1989)

"All of engineering involves some creativity to cover the parts not known, and almost all of science includes some practical engineering to translate the abstractions into practice." (Richard W Hamming, "The Art of Probability for Scientists and Engineers", 1991)

"Good engineering is not a matter of creativity or centering or grounding or inspiration or lateral thinking, as useful as those might be, but of decoding the clever, even witty, messages the solution space carves on the corpses of the ideas in which you believed with all your heart, and then building the road to the next message." (Fred Hapgood, "Up the infinite Corridor: MIT and the Technical Imagination", 1993) 

"[…] creativity is the ability to see the obvious over the long term, and not to be restrained by short-term conventional wisdom." (Arthur J Birch, "To See the Obvious", 1995)

"Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things." (Steve Jobs, 1996)

"The pursuit of science is more than the pursuit of understanding. It is driven by the creative urge, the urge to construct a vision, a map, a picture of the world that gives the world a little more beauty and coherence than it had before." (John A Wheeler, "Geons, Black Holes, and Quantum Foam: A Life in Physics", 1998)

"Simple observation generally gets us nowhere. It is the creative imagination that increases our understanding by finding connections between apparently unrelated phenomena, and forming logical, consistent theories to explain them. And if a theory turns out to be wrong, as many do, all is not lost. The struggle to create an imaginative, correct picture of reality frequently tells us where to go next, even when science has temporarily followed the wrong path." (Richard Morris, "The Universe, the Eleventh Dimension, and Everything: What We Know and How We Know It", 1999)

"Science, and physics in particular, has developed out of the Newtonian paradigm of mechanics. In this world view, every phenomenon we observe can be reduced to a collection of atoms or particles, whose movement is governed by the deterministic laws of nature. Everything that exists now has already existed in some different arrangement in the past, and will continue to exist so in the future. In such a philosophy, there seems to be no place for novelty or creativity." (Francis Heylighen, "The science of self-organization and adaptivity", 2001) 

"Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love. […] Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction." (Ray Kurzweil, "The Singularity is Near", 2005)

"Systemic problems trace back in the end to worldviews. But worldviews themselves are in flux and flow. Our most creative opportunity of all may be to reshape those worldviews themselves. New ideas can change everything." (Anthony Weston, "How to Re-Imagine the World", 2007)

More quotes on "Creativity" at the-web-of-knowledge.blogspot.com

About Me

Koeln, NRW, Germany
IT Professional with more than 25 years of experience in IT in the areas of full life-cycle Web/Desktop/Database Applications Development, Software Engineering, Consultancy, Data Management, Data Quality, Data Migrations, Reporting, ERP implementations & support, Team/Project/IT Management, etc.