
20 August 2019

🛡️Information Security: Cryptanalysis (Definitions)

"Cryptanalysis is the science of analyzing cryptographic methods and algorithms, generally probing them for weaknesses. Cryptanalysts devise new methods of defeating cryptographic algorithms." (Michael Coles & Rodney Landrum, , "Expert SQL Server 2008 Encryption", 2008)

"The science (or art) of breaking cryptographic algorithms." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"The study of mathematical techniques designed to defeat cryptographic techniques. Collectively, a branch of science that deals with cryptography and cryptanalysis is called cryptology. " (Alex Berson & Lawrence Dubov, "Master Data Management and Data Governance", 2010)

"The art of breaking ciphertext." (Manish Agrawal, "Information Security and IT Risk Management", 2014)

"Practice of uncovering flaws within cryptosystems." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"The process of decrypting a message without knowing the cipher or key used to encrypt it" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"The practice of breaking cryptosystems and algorithms used in encryption and decryption processes." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"The process of breaking encryption without the benefit of the key under which data was encrypted." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"Cryptanalysis refers to the study of the cryptosystem or ciphertext to crack the confidentiality of the underlying information and try to gain unauthorized access to the content." (Shafali Agarwal, "Preserving Information Security Using Fractal-Based Cryptosystem", Handbook of Research on Cyber Crime and Information Privacy, 2021)

19 August 2019

🛡️Information Security: Public Key Cryptography [PKC] (Definitions)

"Also known as asymmetric cryptography, a form of cryptography in which a user has a pair of cryptographic keys - a public key and a private key. The private key is kept secret, while the public key may be widely distributed. The keys are related mathematically, but the private key cannot be practically derived from the public key. A message encrypted with the public key can be decrypted only with the corresponding private key." (Martin Oberhofer et al, "Enterprise Master Data Management", 2008)

"Cryptography involving public keys, as opposed to cryptography making use of shared secrets. See Symmetric cryptography." (Mark S Merkow & Lakshmikanth Raghavan, "Secure and Resilient Software Development", 2010)

"An approach to cryptography in which each user has two related keys, one public and one private" (Nell Dale & John Lewis, "Computer Science Illuminated" 6th Ed., 2015)

"An asymmetric cryptosystem where the encrypting and decrypting keys are different and it is computationally infeasible to calculate one form the other, given the encrypting algorithm. In public key cryptography, the encrypting key is made public, but the decrypting key is kept secret." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK 4th Ed.", 2015)

"An encryption method that uses a two-part key: a public key and a private key. Users generally distribute their public key but keep their private key to themselves. This is also known as asymmetric cryptography." (James R Kalyvas & Michael R Overly, "Big Data: A business and legal guide", 2015)

"Encryption system using a public-private key pair for encryption or digital signature. The encrypt and decrypt keys are different, and one cannot derive the private key from the public key." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"Public-key cryptography refers to a cryptographic system requiring two separate keys, one of which is secret and one of which is public. Although different, the two parts of the key pair are mathematically linked. One key locks or encrypts the plaintext, and the other unlocks or decrypts the cipher text. Neither key can perform both functions by itself. The public key may be published without compromising security, while the private key must not be revealed to anyone not authorized to read the messages." (Addepalli V N Krishna & M Balamurugan, "Security Mechanisms in Cloud Computing-Based Big Data", 2019)

"A cryptographic system that requires public and private keys. The private key can decrypt messages encrypted with the corresponding public key, and vice versa. The public key can be made available to the public without compromising security and used to verify that messages sent by the holder of the private key must be genuine." (Daniel Leuck et al, "Learning Java" 5th Ed., 2020)

14 January 2019

🔬Data Science: Evolutionary Algorithm (Definitions)

"An Evolutionary Algorithm (EA) is a general class of fitting or maximization techniques. They all maintain a pool of structures or models that can be mutated and evolve. At every stage in the algorithm, each model is graded and the better models are allowed to reproduce or mutate for the next round. Some techniques allow the successful models to crossbreed. They are all motivated by the biologic process of evolution. Some techniques are asexual (so, there is no crossbreeding between techniques) while others are bisexual, allowing successful models to swap ''genetic' information. The asexual models allow a wide variety of different models to compete, while sexual methods require that the models share a common 'genetic' code." (William J Raynor Jr., "The International Dictionary of Artificial Intelligence", 1999)

"Meta-heuristic optimization approach inspired by natural evolution, which begins with potential solution models, then iteratively applies algorithms to find the fittest models from the set to serve as inputs to the next iteration, ultimately leading to a sub-optimal solution which is close to the optimal one." (Gilles Lebrun et al, "EA Multi-Model Selection for SVM", 2009)

"Evolutionary algorithms are search methods that can be used for solving optimization problems. They mimic working principles from natural evolution by employing a population–based approach, labeling each individual of the population with a fitness and including elements of random, albeit the random is directed through a selection process." (Ivan Zelinka & Hendrik Richter, "Evolutionary Algorithms for Chaos Researchers", Studies in Computational Intelligence Vol. 267, 2010)

"Population-based optimization algorithms in which each member of the population represents a candidate solution. In an iterative process the population members evolve and are then evaluated by a fitness function. Genetic Algorithms and Particle Swarm Optimization are examples of evolutionary algorithms." (Efstathios Kirkos, "Composite Classifiers for Bankruptcy Prediction", 2014)

"A collective term for all variants of (probabilistic) optimization and approximation algorithms that are inspired by Darwinian evolution. Optimal states are approximated by successive improvements based on the variation-selection paradigm. Thereby, the variation operators produce genetic diversity and the selection directs the evolutionary search." (Harish Garg, "A Hybrid GA-GSA Algorithm for Optimizing the Performance of an Industrial System by Utilizing Uncertain Data", 2015)

31 December 2018

🔭Data Science: Big Data (Just the Quotes)

"If we gather more and more data and establish more and more associations, however, we will not finally find that we know something. We will simply end up having more and more data and larger sets of correlations." (Kenneth N Waltz, "Theory of International Politics Source: Theory of International Politics", 1979)

"There are those who try to generalize, synthesize, and build models, and there are those who believe nothing and constantly call for more data. The tension between these two groups is a healthy one; science develops mainly because of the model builders, yet they need the second group to keep them honest." (Andrew Miall, "Principles of Sedimentary Basin Analysis", 1984)

"Big data can change the way social science is performed, but will not replace statistical common sense." (Thomas Landsall-Welfare, "Nowcasting the mood of the nation", Significance 9(4), 2012)

"Big Data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it." (Edd Wilder-James, "What is big data?", 2012) [source]

"The secret to getting the most from Big Data isn’t found in huge server farms or massive parallel computing or in-memory algorithms. Instead, it’s in the almighty pencil." (Matt Ariker, "The One Tool You Need To Make Big Data Work: The Pencil", 2012)

"Big data is the most disruptive force this industry has seen since the introduction of the relational database." (Jeffrey Needham, "Disruptive Possibilities: How Big Data Changes Everything", 2013)

"No subjective metric can escape strategic gaming [...] The possibility of mischief is bottomless. Fighting ratings is fruitless, as they satisfy a very human need. If one scheme is beaten down, another will take its place and wear its flaws. Big Data just deepens the danger. The more complex the rating formulas, the more numerous the opportunities there are to dress up the numbers. The larger the data sets, the harder it is to audit them." (Kaiser Fung, "Numbersense: How To Use Big Data To Your Advantage", 2013)

"There is convincing evidence that data-driven decision-making and big data technologies substantially improve business performance. Data science supports data-driven decision-making - and sometimes conducts such decision-making automatically - and depends upon technologies for 'big data' storage and engineering, but its principles are separate." (Foster Provost & Tom Fawcett, "Data Science for Business", 2013)

"Our needs going forward will be best served by how we make use of not just this data but all data. We live in an era of Big Data. The world has seen an explosion of information in the past decades, so much so that people and institutions now struggle to keep pace. In fact, one of the reasons for the attachment to the simplicity of our indicators may be an inverse reaction to the sheer and bewildering volume of information most of us are bombarded by on a daily basis. […] The lesson for a world of Big Data is that in an environment with excessive information, people may gravitate toward answers that simplify reality rather than embrace the sheer complexity of it." (Zachary Karabell, "The Leading Indicators: A short history of the numbers that rule our world", 2014)

"The other buzzword that epitomizes a bias toward substitution is 'big data'. Today’s companies have an insatiable appetite for data, mistakenly believing that more data always creates more value. But big data is usually dumb data. Computers can find patterns that elude humans, but they don’t know how to compare patterns from different sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst (or the kind of generalized artificial intelligence that exists only in science fiction)." (Peter Thiel & Blake Masters, "Zero to One: Notes on Startups, or How to Build the Future", 2014)

"We have let ourselves become enchanted by big data only because we exoticize technology. We’re impressed with small feats accomplished by computers alone, but we ignore big achievements from complementarity because the human contribution makes them less uncanny. Watson, Deep Blue, and ever-better machine learning algorithms are cool. But the most valuable companies in the future won’t ask what problems can be solved with computers alone. Instead, they’ll ask: how can computers help humans solve hard problems?" (Peter Thiel & Blake Masters, "Zero to One: Notes on Startups, or How to Build the Future", 2014)

"As business leaders we need to understand that lack of data is not the issue. Most businesses have more than enough data to use constructively; we just don't know how to use it. The reality is that most businesses are already data rich, but insight poor." (Bernard Marr, Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance, 2015)

"Big data is based on the feedback economy where the Internet of Things places sensors on more and more equipment. More and more data is being generated as medical records are digitized, more stores have loyalty cards to track consumer purchases, and people are wearing health-tracking devices. Generally, big data is more about looking at behavior, rather than monitoring transactions, which is the domain of traditional relational databases. As the cost of storage is dropping, companies track more and more data to look for patterns and build predictive models." (Neil Dunlop, "Big Data", 2015)

"Big Data often seems like a meaningless buzz phrase to older database professionals who have been experiencing exponential growth in database volumes since time immemorial. There has never been a moment in the history of database management systems when the increasing volume of data has not been remarkable." (Guy Harrison, "Next Generation Databases: NoSQL, NewSQL, and Big Data", 2015)

"Dimensionality reduction is essential for coping with big data - like the data coming in through your senses every second. A picture may be worth a thousand words, but it’s also a million times more costly to process and remember. [...] A common complaint about big data is that the more data you have, the easier it is to find spurious patterns in it. This may be true if the data is just a huge set of disconnected entities, but if they’re interrelated, the picture changes." (Pedro Domingos, "The Master Algorithm", 2015)

"Science’s predictions are more trustworthy, but they are limited to what we can systematically observe and tractably model. Big data and machine learning greatly expand that scope. Some everyday things can be predicted by the unaided mind, from catching a ball to carrying on a conversation. Some things, try as we might, are just unpredictable. For the vast middle ground between the two, there’s machine learning." (Pedro Domingos, "The Master Algorithm", 2015)

"The human side of analytics is the biggest challenge to implementing big data." (Paul Gibbons, "The Science of Successful Organizational Change", 2015)

"To make progress, every field of science needs to have data commensurate with the complexity of the phenomena it studies. [...] With big data and machine learning, you can understand much more complex phenomena than before. In most fields, scientists have traditionally used only very limited kinds of models, like linear regression, where the curve you fit to the data is always a straight line. Unfortunately, most phenomena in the world are nonlinear. [...] Machine learning opens up a vast new world of nonlinear models." (Pedro Domingos, "The Master Algorithm", 2015)

"Underfitting is when a model doesn’t take into account enough information to accurately model real life. For example, if we observed only two points on an exponential curve, we would probably assert that there is a linear relationship there. But there may not be a pattern, because there are only two points to reference. [...] It seems that the best way to mitigate underfitting a model is to give it more information, but this actually can be a problem as well. More data can mean more noise and more problems. Using too much data and too complex of a model will yield something that works for that particular data set and nothing else." (Matthew Kirk, "Thoughtful Machine Learning", 2015)

"We are moving slowly into an era where Big Data is the starting point, not the end." (Pearl Zhu, "Digital Master: Debunk the Myths of Enterprise Digital Maturity", 2015)

"A popular misconception holds that the era of Big Data means the end of a need for sampling. In fact, the proliferation of data of varying quality and relevance reinforces the need for sampling as a tool to work efficiently with a variety of data, and minimize bias. Even in a Big Data project, predictive models are typically developed and piloted with samples." (Peter C Bruce & Andrew G Bruce, "Statistics for Data Scientists: 50 Essential Concepts", 2016)

"Big data is, in a nutshell, large amounts of data that can be gathered up and analyzed to determine whether any patterns emerge and to make better decisions." (Daniel Covington, Analytics: Data Science, Data Analysis and Predictive Analytics for Business, 2016)

"Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit." (Cathy O'Neil, "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", 2016)

"While Big Data, when managed wisely, can provide important insights, many of them will be disruptive. After all, it aims to find patterns that are invisible to human eyes. The challenge for data scientists is to understand the ecosystems they are wading into and to present not just the problems but also their possible solutions." (Cathy O'Neil, "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", 2016)

"Big Data allows us to meaningfully zoom in on small segments of a dataset to gain new insights on who we are." (Seth Stephens-Davidowitz, "Everybody Lies: What the Internet Can Tell Us About Who We Really Are", 2017)

"Effects without an understanding of the causes behind them, on the other hand, are just bunches of data points floating in the ether, offering nothing useful by themselves. Big Data is information, equivalent to the patterns of light that fall onto the eye. Big Data is like the history of stimuli that our eyes have responded to. And as we discussed earlier, stimuli are themselves meaningless because they could mean anything. The same is true for Big Data, unless something transformative is brought to all those data sets… understanding." (Beau Lotto, "Deviate: The Science of Seeing Differently", 2017)

"The term [Big Data] simply refers to sets of data so immense that they require new methods of mathematical analysis, and numerous servers. Big Data - and, more accurately, the capacity to collect it - has changed the way companies conduct business and governments look at problems, since the belief wildly trumpeted in the media is that this vast repository of information will yield deep insights that were previously out of reach." (Beau Lotto, "Deviate: The Science of Seeing Differently", 2017)

"There are other problems with Big Data. In any large data set, there are bound to be inconsistencies, misclassifications, missing data - in other words, errors, blunders, and possibly lies. These problems with individual items occur in any data set, but they are often hidden in a large mass of numbers even when these numbers are generated out of computer interactions." (David S Salsburg, "Errors, Blunders, and Lies: How to Tell the Difference", 2017)

"Just as they did thirty years ago, machine learning programs (including those with deep neural networks) operate almost entirely in an associational mode. They are driven by a stream of observations to which they attempt to fit a function, in much the same way that a statistician tries to fit a line to a collection of points. Deep neural networks have added many more layers to the complexity of the fitted function, but raw data still drives the fitting process. They continue to improve in accuracy as more data are fitted, but they do not benefit from the 'super-evolutionary speedup'."  (Judea Pearl & Dana Mackenzie, "The Book of Why: The new science of cause and effect", 2018)

"One of the biggest myths is the belief that data science is an autonomous process that we can let loose on our data to find the answers to our problems. In reality, data science requires skilled human oversight throughout the different stages of the process. [...] The second big myth of data science is that every data science project needs big data and needs to use deep learning. In general, having more data helps, but having the right data is the more important requirement. [...] A third data science myth is that modern data science software is easy to use, and so data science is easy to do. [...] The last myth about data science [...] is the belief that data science pays for itself quickly. The truth of this belief depends on the context of the organization. Adopting data science can require significant investment in terms of developing data infrastructure and hiring staff with data science expertise. Furthermore, data science will not give positive results on every project." (John D Kelleher & Brendan Tierney, "Data Science", 2018)

"Apart from the technical challenge of working with the data itself, visualization in big data is different because showing the individual observations is just not an option. But visualization is essential here: for analysis to work well, we have to be assured that patterns and errors in the data have been spotted and understood. That is only possible by visualization with big data, because nobody can look over the data in a table or spreadsheet." (Robert Grant, "Data Visualization: Charts, Maps and Interactive Graphics", 2019)

"With the growing availability of massive data sets and user-friendly analysis software, it might be thought that there is less need for training in statistical methods. This would be naïve in the extreme. Far from freeing us from the need for statistical skills, bigger data and the rise in the number and complexity of scientific studies makes it even more difficult to draw appropriate conclusions. More data means that we need to be even more aware of what the evidence is actually worth." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)

"Big data is revolutionizing the world around us, and it is easy to feel alienated by tales of computers handing down decisions made in ways we don’t understand. I think we’re right to be concerned. Modern data analytics can produce some miraculous results, but big data is often less trustworthy than small data. Small data can typically be scrutinized; big data tends to be locked away in the vaults of Silicon Valley. The simple statistical tools used to analyze small datasets are usually easy to check; pattern-recognizing algorithms can all too easily be mysterious and commercially sensitive black boxes." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Making big data work is harder than it seems. Statisticians have spent the past two hundred years figuring out what traps lie in wait when we try to understand the world through data. The data are bigger, faster, and cheaper these days, but we must not pretend that the traps have all been made safe. They have not." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Many people have strong intuitions about whether they would rather have a vital decision about them made by algorithms or humans. Some people are touchingly impressed by the capabilities of the algorithms; others have far too much faith in human judgment. The truth is that sometimes the algorithms will do better than the humans, and sometimes they won’t. If we want to avoid the problems and unlock the promise of big data, we’re going to need to assess the performance of the algorithms on a case-by-case basis. All too often, this is much harder than it should be. […] So the problem is not the algorithms, or the big datasets. The problem is a lack of scrutiny, transparency, and debate." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"The problem is the hype, the notion that something magical will emerge if only we can accumulate data on a large enough scale. We just need to be reminded: Big data is not better; it’s just bigger. And it certainly doesn’t speak for itself." (Carl T Bergstrom & Jevin D West, "Calling Bullshit: The Art of Skepticism in a Data-Driven World", 2020)

"[...] the focus on Big Data AI seems to be an excuse to put forth a number of vague and hand-waving theories, where the actual details and the ultimate success of neuroscience is handed over to quasi- mythological claims about the powers of large datasets and inductive computation. Where humans fail to illuminate a complicated domain with testable theory, machine learning and big data supposedly can step in and render traditional concerns about finding robust theories. This seems to be the logic of Data Brain efforts today. (Erik J Larson, "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do", 2021)

"We live on islands surrounded by seas of data. Some call it 'big data'. In these seas live various species of observable phenomena. Ideas, hypotheses, explanations, and graphics also roam in the seas of data and can clarify the waters or allow unsupported species to die. These creatures thrive on visual explanation and scientific proof. Over time new varieties of graphical species arise, prompted by new problems and inner visions of the fishers in the seas of data." (Michael Friendly & Howard Wainer, "A History of Data Visualization and Graphic Communication", 2021)

"Visualizations can remove the background noise from enormous sets of data so that only the most important points stand out to the intended audience. This is particularly important in the era of big data. The more data there is, the more chance for noise and outliers to interfere with the core concepts of the data set." (Kate Strachnyi, "ColorWise: A Data Storyteller’s Guide to the Intentional Use of Color", 2023)

"Visualisation is fundamentally limited by the number of pixels you can pump to a screen. If you have big data, you have way more data than pixels, so you have to summarise your data. Statistics gives you lots of really good tools for this." (Hadley Wickham)

25 December 2018

🔭Data Science: Data Scientists (Just the quotes)

"[...] be wary of analysts that try to quantify the unquantifiable." (Ralph Keeney & Raiffa Howard, "Decisions with Multiple Objectives: Preferences and Value Trade-offs", 1976)

"Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance." (Richard W Hamming, "You and Your Research", 1986) 

"Many new data scientists tend to rush past it to get their data into a minimally acceptable state, only to discover that the data has major quality issues after they apply their (potentially computationally intensive) algorithm and get a nonsense answer as output. (Sandy Ryza, "Advanced Analytics with Spark: Patterns for Learning from Data at Scale", 2009)

"Data scientists combine entrepreneurship with patience, the willingness to build data products incrementally, the ability to explore, and the ability to iterate over a solution. They are inherently interdisciplinary. They can tackle all aspects of a problem, from initial data collection and data conditioning to drawing conclusions. They can think outside the box to come up with new ways to view the problem, or to work with very broadly defined problems: 'there’s a lot of data, what can you make from it?'" (Mike Loukides, "What Is Data Science?", 2011)

"As data scientists, we prefer to interact with the raw data. We know how to import it, transform it, mash it up with other data sources, and visualize it. Most of your customers can’t do that. One of the biggest challenges of developing a data product is figuring out how to give data back to the user. Giving back too much data in a way that’s overwhelming and paralyzing is 'data vomit'. It’s natural to build the product that you would want, but it’s very easy to overestimate the abilities of your users. The product you want may not be the product they want." (Dhanurjay Patil, "Data Jujitsu: The Art of Turning Data into Product", 2012)

"In an emergency, a data product that just produces more data is of little use. Data scientists now have the predictive tools to build products that increase the common good, but they need to be aware that building the models is not enough if they do not also produce optimized, implementable outcomes." (Jeremy Howard et al, "Designing Great Data Products", 2012)

"Smart data scientists don’t just solve big, hard problems; they also have an instinct for making big problems small." (Dhanurjay Patil, "Data Jujitsu: The Art of Turning Data into Product", 2012)

"More generally, a data scientist is someone who knows how to extract meaning from and interpret data, which requires both tools and methods from statistics and machine learning, as well as being human. She spends a lot of time in the process of collecting, cleaning, and munging data, because data is never clean. This process requires persistence, statistics, and software engineering skills - skills that are also necessary for understanding biases in the data, and for debugging logging output from code. Once she gets the data into shape, a crucial part is exploratory data analysis, which combines visualization and data sense. She’ll find patterns, build models, and algorithms - some with the intention of understanding product usage and the overall health of the product, and others to serve as prototypes that ultimately get baked back into the product. She may design experiments, and she is a critical part of data-driven decision making. She’ll communicate with team members, engineers, and leadership in clear language and with data visualizations so that even if her colleagues are not immersed in the data themselves, they will understand the implications." (Rachel Schutt, "Doing Data Science: Straight Talk from the Frontline", 2013)

"Unfortunately, creating an objective function that matches the true goal of the data mining is usually impossible, so data scientists often choose based on faith and experience." (Foster Provost, "Data Science for Business", 2013)

"[...] a data scientist role goes beyond the collection and reporting on data; it must involve looking at a business The role of a data scientist goes beyond the collection and reporting on data. application or process from multiple vantage points and determining what the main questions and follow-ups are, as well as recommending the most appropriate ways to employ the data at hand." (Jesús Rogel-Salazar, "Data Science and Analytics with Python", 2017)

"In terms of characteristics, a data scientist has an inquisitive mind and is prepared to explore and ask questions, examine assumptions and analyse processes, test hypotheses and try out solutions and, based on evidence, communicate informed conclusions, recommendations and caveats to stakeholders and decision makers." (Jesús Rogel-Salazar, "Data Science and Analytics with Python", 2017)

"Repeated observations of the same phenomenon do not always produce the same results, due to random noise or error. Sampling errors result when our observations capture unrepresentative circumstances, like measuring rush hour traffic on weekends as well as during the work week. Measurement errors reflect the limits of precision inherent in any sensing device. The notion of signal to noise ratio captures the degree to which a series of observations reflects a quantity of interest as opposed to data variance. As data scientists, we care about changes in the signal instead of the noise, and such variance often makes this problem surprisingly difficult." (Steven S Skiena, "The Data Science Design Manual", 2017)

"Data scientists should have some domain expertise. Most data science projects begin with a real-world, domain-specific problem and the need to design a data-driven solution to this problem. As a result, it is important for a data scientist to have enough domain expertise that they understand the problem, why it is important, an dhow a data science solution to the problem might fit into an organization’s processes. This domain expertise guides the data scientist as she works toward identifying an optimized solution." (John D Kelleher & Brendan Tierney, "Data Science", 2018)

"A data scientist should be able to wrangle, mung, manipulate, and consolidate datasets before performing calculations on that data that help us to understand it. Analysis is a broad term, but it's clear that the end result is knowledge of your dataset that you didn't have before you started, no matter how basic or complex. [...] A data scientist usually has to be able to apply statistical, mathematical, and machine learning models to data in order to explain it or perform some sort of prediction." (Andrew P McMahon, "Machine Learning Engineering with Python", 2021)

"Data scientists are advanced in their technical skills. They like to do coding, statistics, and so forth. In its purest form, data science is where an individual uses the scientific method on data." (Jordan Morrow, "Be Data Literate: The data literacy skills everyone needs to succeed", 2021)

"The ideal data scientist is a multi-disciplinary person, persistent in pursuing the solution." (Anil Maheshwari, "Data Analytics Made Accessible", 2021)

"Overall [...] everyone also has a need to analyze data. The ability to analyze data is vital in its understanding of product launch success. Everyone needs the ability to find trends and patterns in the data and information. Everyone has a need to ‘discover or reveal (something) through detailed examination’, as our definition says. Not everyone needs to be a data scientist, but everyone needs to drive questions and analysis. Everyone needs to dig into the information to be successful with diagnostic analytics. This is one of the biggest keys of data literacy: analyzing data." (Jordan Morrow, "Be Data Literate: The data literacy skills everyone needs to succeed", 2021)

"A data scientist is someone who can obtain, scrub, explore, model and interpret data, blending hacking, statistics and machine learning. Data scientists not only are adept at working with data, but appreciate data itself as a first-class product." (Hillary Mason)

"A data scientist is someone who knows more statistics than a computer scientist and more computer science than a statistician." (Josh Blumenstock) [attributed]

"All businesses could use a garden where Data Scientists plant seeds of possibility and water them with collaboration." (Damian Mingle)

"Data scientist (noun): Person who is better at statistics than any software engineer and better at software engineering than any statistician." (Josh Wills)

"Data Scientists should recall innovation often times is not providing fancy algorithms, but rather value to the customer." (Damian Mingle)

"Data Scientists should refuse to be defined by someone else's vision of what's possible." (Damian Mingle)

14 December 2018

🔭Data Science: Algorithms (Just the Quotes)

"Mathematics is an aspect of culture as well as a collection of algorithms." (Carl B Boyer, "The History of the Calculus and Its Conceptual Development", 1959)

"Design problems - generating or discovering alternatives - are complex largely because they involve two spaces, an action space and a state space, that generally have completely different structures. To find a design requires mapping the former of these on the latter. For many, if not most, design problems in the real world systematic algorithms are not known that guarantee solutions with reasonable amounts of computing effort. Design uses a wide range of heuristic devices - like means-end analysis, satisficing, and the other procedures that have been outlined - that have been found by experience to enhance the efficiency of search. Much remains to be learned about the nature and effectiveness of these devices." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"An algorithm must be seen to be believed, and the best way to learn what an algorithm is all about is to try it." (Donald E Knuth, The Art of Computer Programming Vol. I, 1968)

"Scientific laws give algorithms, or procedures, for determining how systems behave. The computer program is a medium in which the algorithms can be expressed and applied. Physical objects and mathematical structures can be represented as numbers and symbols in a computer, and a program can be written to manipulate them according to the algorithms. When the computer program is executed, it causes the numbers and symbols to be modified in the way specified by the scientific laws. It thereby allows the consequences of the laws to be deduced." (Stephen Wolfram, "Computer Software in Science and Mathematics", 1984)

"Algorithmic complexity theory and nonlinear dynamics together establish the fact that determinism reigns only over a quite finite domain; outside this small haven of order lies a largely uncharted, vast wasteland of chaos." (Joseph Ford, "Progress in Chaotic Dynamics: Essays in Honor of Joseph Ford's 60th Birthday", 1988)

"On this view, we recognize science to be the search for algorithmic compressions. We list sequences of observed data. We try to formulate algorithms that compactly represent the information content of those sequences. Then we test the correctness of our hypothetical abbreviations by using them to predict the next terms in the string. These predictions can then be compared with the future direction of the data sequence. Without the development of algorithmic compressions of data all science would be replaced by mindless stamp collecting - the indiscriminate accumulation of every available fact. Science is predicated upon the belief that the Universe is algorithmically compressible and the modern search for a Theory of Everything is the ultimate expression of that belief, a belief that there is an abbreviated representation of the logic behind the Universe's properties that can be written down in finite form by human beings." (John D Barrow, New Theories of Everything", 1991)

"Algorithms are a set of procedures to generate the answer to a problem." (Stuart Kauffman, "At Home in the Universe: The Search for Laws of Complexity", 1995)

"Let us regard a proof of an assertion as a purely mechanical procedure using precise rules of inference starting with a few unassailable axioms. This means that an algorithm can be devised for testing the validity of an alleged proof simply by checking the successive steps of the argument; the rules of inference constitute an algorithm for generating all the statements that can be deduced in a finite number of steps from the axioms." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"The vast majority of information that we have on most processes tends to be nonnumeric and nonalgorithmic. Most of the information is fuzzy and linguistic in form." (Timothy J Ross & W Jerry Parkinson, "Fuzzy Set Theory, Fuzzy Logic, and Fuzzy Systems", 2002)

"Knowledge is encoded in models. Models are synthetic sets of rules, and pictures, and algorithms providing us with useful representations of the world of our perceptions and of their patterns." (Didier Sornette, "Why Stock Markets Crash - Critical Events in Complex Systems", 2003)

"Swarm Intelligence can be defined more precisely as: Any attempt to design algorithms or distributed problem-solving methods inspired by the collective behavior of the social insect colonies or other animal societies. The main properties of such systems are flexibility, robustness, decentralization and self-organization." ("Swarm Intelligence in Data Mining", Ed. Ajith Abraham et al, 2006)

"The burgeoning field of computer science has shifted our view of the physical world from that of a collection of interacting material particles to one of a seething network of information. In this way of looking at nature, the laws of physics are a form of software, or algorithm, while the material world - the hardware - plays the role of a gigantic computer." (Paul C W Davies, "Laying Down the Laws", New Scientist, 2007)

"An algorithm refers to a successive and finite procedure by which it is possible to solve a certain problem. Algorithms are the operational base for most computer programs. They consist of a series of instructions that, thanks to programmers’ prior knowledge about the essential characteristics of a problem that must be solved, allow a step-by-step path to the solution." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)

"Programming is a science dressed up as art, because most of us don’t understand the physics of software and it’s rarely, if ever, taught. The physics of software is not algorithms, data structures, languages, and abstractions. These are just tools we make, use, and throw away. The real physics of software is the physics of people. Specifically, it’s about our limitations when it comes to complexity and our desire to work together to solve large problems in pieces. This is the science of programming: make building blocks that people can understand and use easily, and people will work together to solve the very largest problems." (Pieter Hintjens, "ZeroMQ: Messaging for Many Applications", 2012)

"These nature-inspired algorithms gradually became more and more attractive and popular among the evolutionary computation research community, and together they were named swarm intelligence, which became the little brother of the major four evolutionary computation algorithms." (Yuhui Shi, "Emerging Research on Swarm Intelligence and Algorithm Optimization", Information Science Reference, 2014)

"[...] algorithms, which are abstract or idealized process descriptions that ignore details and practicalities. An algorithm is a precise and unambiguous recipe. It’s expressed in terms of a fixed set of basic operations whose meanings are completely known and specified. It spells out a sequence of steps using those operations, with all possible situations covered, and it’s guaranteed to stop eventually." (Brian W Kernighan, "Understanding the Digital World", 2017)

"An algorithm is the computer science version of a careful, precise, unambiguous recipe or tax form, a sequence of steps that is guaranteed to compute a result correctly." (Brian W Kernighan, "Understanding the Digital World", 2017)

"Again, classical statistics only summarizes data, so it does not provide even a language for asking [a counterfactual] question. Causal inference provides a notation and, more importantly, offers a solution. As with predicting the effect of interventions [...], in many cases we can emulate human retrospective thinking with an algorithm that takes what we know about the observed world and produces an answer about the counterfactual world." (Judea Pearl & Dana Mackenzie, "The Book of Why: The new science of cause and effect", 2018)

"Algorithms describe the solution to a problem in terms of the data needed to represent the  problem instance and a set of steps necessary to produce the intended result." (Bradley N Miller et al, "Python Programming in Context", 2019)

"An algorithm, meanwhile, is a step-by-step recipe for performing a series of actions, and in most cases 'algorithm' means simply 'computer program'." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Big data is revolutionizing the world around us, and it is easy to feel alienated by tales of computers handing down decisions made in ways we don’t understand. I think we’re right to be concerned. Modern data analytics can produce some miraculous results, but big data is often less trustworthy than small data. Small data can typically be scrutinized; big data tends to be locked away in the vaults of Silicon Valley. The simple statistical tools used to analyze small datasets are usually easy to check; pattern-recognizing algorithms can all too easily be mysterious and commercially sensitive black boxes." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Each of us is sweating data, and those data are being mopped up and wrung out into oceans of information. Algorithms and large datasets are being used for everything from finding us love to deciding whether, if we are accused of a crime, we go to prison before the trial or are instead allowed to post bail. We all need to understand what these data are and how they can be exploited." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Many people have strong intuitions about whether they would rather have a vital decision about them made by algorithms or humans. Some people are touchingly impressed by the capabilities of the algorithms; others have far too much faith in human judgment. The truth is that sometimes the algorithms will do better than the humans, and sometimes they won’t. If we want to avoid the problems and unlock the promise of big data, we’re going to need to assess the performance of the algorithms on a case-by-case basis. All too often, this is much harder than it should be. […] So the problem is not the algorithms, or the big datasets. The problem is a lack of scrutiny, transparency, and debate." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

More quotes on "Algorithms" at the-web-of-knowledge.blogspot.com.

12 December 2018

🔭Data Science: Neural Networks (Just the Quotes)

"The terms 'black box' and 'white box' are convenient and figurative expressions of not very well determined usage. I shall understand by a black box a piece of apparatus, such as four-terminal networks with two input and two output terminals, which performs a definite operation on the present and past of the input potential, but for which we do not necessarily have any information of the structure by which this operation is performed. On the other hand, a white box will be similar network in which we have built in the relation between input and output potentials in accordance with a definite structural plan for securing a previously determined input-output relation." (Norbert Wiener, "Cybernetics: Or Control and Communication in the Animal and the Machine", 1948)

"A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1. Knowledge is acquired by the network through a learning process. 2. Interneuron connection strengths known as synaptic weights are used to store the knowledge." (Igor Aleksander, "An introduction to neural computing", 1990) 

"Neural Computing is the study of networks of adaptable nodes which through a process of learning from task examples, store experiential knowledge and make it available for use." (Igor Aleksander, "An introduction to neural computing", 1990)

"A neural network is characterized by (1) its pattern of connections between the neurons (called its architecture), (2) its method of determining the weights on the connections (called its training, or learning, algorithm), and (3) its activation function." (Laurene Fausett, "Fundamentals of Neural Networks", 1994)

"An artificial neural network is an information-processing system that has certain performance characteristics in common with biological neural networks. Artificial neural networks have been developed as generalizations of mathematical models of human cognition or neural biology, based on the assumptions that: (1) Information processing occurs at many simple elements called neurons. (2) Signals are passed between neurons over connection links. (3) Each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted. (4) Each neuron applies an activation function (usually nonlinear) to its net input (sum of weighted input signals) to determine its output signal." (Laurene Fausett, "Fundamentals of Neural Networks", 1994)

"An artificial neural network (or simply a neural network) is a biologically inspired computational model that consists of processing elements (neurons) and connections between them, as well as of training and recall algorithms." (Nikola K Kasabov, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering", 1996)

"Many of the basic functions performed by neural networks are mirrored by human abilities. These include making distinctions between items (classification), dividing similar things into groups (clustering), associating two or more things (associative memory), learning to predict outcomes based on examples (modeling), being able to predict into the future (time-series forecasting), and finally juggling multiple goals and coming up with a good- enough solution (constraint satisfaction)." (Joseph P Bigus,"Data Mining with Neural Networks: Solving business problems from application development to decision support", 1996)

"More than just a new computing architecture, neural networks offer a completely different paradigm for solving problems with computers. […] The process of learning in neural networks is to use feedback to adjust internal connections, which in turn affect the output or answer produced. The neural processing element combines all of the inputs to it and produces an output, which is essentially a measure of the match between the input pattern and its connection weights. When hundreds of these neural processors are combined, we have the ability to solve difficult problems such as credit scoring." (Joseph P Bigus,"Data Mining with Neural Networks: Solving business problems from application development to decision support", 1996)

"Neural networks are a computing model grounded on the ability to recognize patterns in data. As a consequence, they have many applications to data mining and analysis." (Joseph P Bigus,"Data Mining with Neural Networks: Solving business problems from application development to decision support", 1996)

"Neural networks are a computing technology whose fundamental purpose is to recognize patterns in data. Based on a computing model similar to the underlying structure of the human brain, neural networks share the brains ability to learn or adapt in response to external inputs. When exposed to a stream of training data, neural networks can discover previously unknown relationships and learn complex nonlinear mappings in the data. Neural networks provide some fundamental, new capabilities for processing business data. However, tapping these new neural network data mining functions requires a completely different application development process from traditional programming." (Joseph P Bigus, "Data Mining with Neural Networks: Solving business problems from application development to decision support", 1996)

"The most familiar example of swarm intelligence is the human brain. Memory, perception and thought all arise out of the nett actions of billions of individual neurons. As we saw earlier, artificial neural networks (ANNs) try to mimic this idea. Signals from the outside world enter via an input layer of neurons. These pass the signal through a series of hidden layers, until the result emerges from an output layer. Each neuron modifies the signal in some simple way. It might, for instance, convert the inputs by plugging them into a polynomial, or some other simple function. Also, the network can learn by modifying the strength of the connections between neurons in different layers." (David G Green, "The Serendipity Machine: A voyage of discovery through the unexpected world of computers", 2004)

"A neural network is a particular kind of computer program, originally developed to try to mimic the way the human brain works. It is essentially a computer simulation of a complex circuit through which electric current flows." (Keith J Devlin & Gary Lorden, "The Numbers behind NUMB3RS: Solving crime with mathematics", 2007)

 "Neural networks are a popular model for learning, in part because of their basic similarity to neural assemblies in the human brain. They capture many useful effects, such as learning from complex data, robustness to noise or damage, and variations in the data set. " (Peter C R Lane, Order Out of Chaos: Order in Neural Networks, 2007)

"A network of many simple processors ('units' or 'neurons') that imitates a biological neural network. The units are connected by unidirectional communication channels, which carry numeric data. Neural networks can be trained to find nonlinear relationships in data, and are used in various applications such as robotics, speech recognition, signal processing, medical diagnosis, or power systems." (Adnan Khashman et al, "Voltage Instability Detection Using Neural Networks", 2009)

"An artificial neural network, often just called a 'neural network' (NN), is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing based on a connectionist approach to computation. Knowledge is acquired by the network from its environment through a learning process, and interneuron connection strengths (synaptic weighs) are used to store the acquired knowledge." (Larbi Esmahi et al, "Adaptive Neuro-Fuzzy Systems", 2009)

"Generally, these programs fall within the techniques of reinforcement learning and the majority use an algorithm of temporal difference learning. In essence, this computer learning paradigm approximates the future state of the system as a function of the present state. To reach that future state, it uses a neural network that changes the weight of its parameters as it learns." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)

"The simplest basic architecture of an artificial neural network is composed of three layers of neurons - input, output, and intermediary (historically called perceptron). When the input layer is stimulated, each node responds in a particular way by sending information to the intermediary level nodes, which in turn distribute it to the output layer nodes and thereby generate a response. The key to artificial neural networks is in the ways that the nodes are connected and how each node reacts to the stimuli coming from the nodes it is connected to. Just as with the architecture of the brain, the nodes allow information to pass only if a specific stimulus threshold is passed. This threshold is governed by a mathematical equation that can take different forms. The response depends on the sum of the stimuli coming from the input node connections and is 'all or nothing'." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)

"Neural networks can model very complex patterns and decision boundaries in the data and, as such, are very powerful. In fact, they are so powerful that they can even model the noise in the training data, which is something that definitely should be avoided. One way to avoid this overfitting is by using a validation set in a similar way as with decision trees.[...] Another scheme to prevent a neural network from overfitting is weight regularization, whereby the idea is to keep the weights small in absolute sense because otherwise they may be fitting the noise in the data. This is then implemented by adding a weight size term (e.g., Euclidean norm) to the objective function of the neural network." (Bart Baesens, "Analytics in a Big Data World: The Essential Guide to Data Science and Its Applications", 2014)

"A neural network consists of a set of neurons that are connected together. A neuron takes a set of numeric values as input and maps them to a single output value. At its core, a neuron is simply a multi-input linear-regression function. The only significant difference between the two is that in a neuron the output of the multi-input linear-regression function is passed through another function that is called the activation function." (John D Kelleher & Brendan Tierney, "Data Science", 2018)

"Just as they did thirty years ago, machine learning programs (including those with deep neural networks) operate almost entirely in an associational mode. They are driven by a stream of observations to which they attempt to fit a function, in much the same way that a statistician tries to fit a line to a collection of points. Deep neural networks have added many more layers to the complexity of the fitted function, but raw data still drives the fitting process. They continue to improve in accuracy as more data are fitted, but they do not benefit from the 'super-evolutionary speedup'."  (Judea Pearl & Dana Mackenzie, "The Book of Why: The new science of cause and effect", 2018)

"A neural-network algorithm is simply a statistical procedure for classifying inputs (such as numbers, words, pixels, or sound waves) so that these data can mapped into outputs. The process of training a neural-network model is advertised as machine learning, suggesting that neural networks function like the human mind, but neural networks estimate coefficients like other data-mining algorithms, by finding the values for which the model’s predictions are closest to the observed values, with no consideration of what is being modeled or whether the coefficients are sensible." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)

"Deep neural networks have an input layer and an output layer. In between, are “hidden layers” that process the input data by adjusting various weights in order to make the output correspond closely to what is being predicted. [...] The mysterious part is not the fancy words, but that no one truly understands how the pattern recognition inside those hidden layers works. That’s why they’re called 'hidden'. They are an inscrutable black box - which is okay if you believe that computers are smarter than humans, but troubling otherwise." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)

"Neural-network algorithms do not know what they are manipulating, do not understand their results, and have no way of knowing whether the patterns they uncover are meaningful or coincidental. Nor do the programmers who write the code know exactly how they work and whether the results should be trusted. Deep neural networks are also fragile, meaning that they are sensitive to small changes and can be fooled easily." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)

"The label neural networks suggests that these algorithms replicate the neural networks in human brains that connect electrically excitable cells called neurons. They don’t. We have barely scratched the surface in trying to figure out how neurons receive, store, and process information, so we cannot conceivably mimic them with computers." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)

More quotes on "Neural Networks" at the-web-of-knowledge.blogspot.com.

14 November 2018

🔭Data Science: There's No Free Lunch (Just the Quotes)

"There is nothing in any object, consider'd in itself, which can afford us a reason for drawing a conclusion beyond it; […] even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience." (David Hume, "A Treatise of Human Nature", 1739) [considered as the first attempt to underline the limits of inductive inference] 

"Consider any of the heuristics that people have come up with for supervised learning: avoid overfitting, prefer simpler to more complex models, boost your algorithm, bag it, etc. The no free lunch theorems say that all such heuristics fail as often (appropriately weighted) as they succeed. This is true despite formal arguments some have offered trying to prove the validity of some of these heuristics." (David H Wolpert, "The lack of a priori distinctions between learning algorithms", Neural Computation Vol. 8(7), 1996)

"[...] an algorithm’s average performance is determined by how 'aligned' it is with the underlying probability distribution over optimization problems on which it is run." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)

"[...] despite the NFL theorems, algorithms can have a priori distinctions that hold even if nothing is specified concerning the optimization problems. In particular, we show that there can be 'head-to-head' minimax distinctions between a pair of algorithms, i.e., that when considering one function at a time ,a pair of algorithms may be distinguishable, even if they are not when one looks over all functions." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)

"The NFL theorems do not directly address minimax properties of search. For example, say we are considering two deterministic algorithms a1 and a2. It may very well be that there exist cost functions such that a1’s histogram is much better (according to some appropriate performance measure) than a2’s, but no cost functions for which the reverse is true. For the NFL theorem to be obeyed in such a scenario, it would have to be true that there are many more f for which a2’s histogram is better than a1’s than vice-versa, but it is only slightly better for all those f. For such a scenario, in a certain sense a1 has better 'head-to-head' minimax behavior than a2; there are f for which a1 beats a2 badly, but none for which a1 does substantially worse than a2." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)

"[...] the NFL theorems mean that if an algorithm does particularly well on average for one class of problems then it must do worse on average over the remaining problems. In particular, if an algorithm performs better than random search on some class of problems then in must perform worse than random search on the remaining problems. Thus comparisons reporting the performance of a particular algorithm with a particular parameter setting on a few sample problems are of limited utility. While such results do indicate behavior on the narrow range of problems considered, one should be very wary of trying to generalize those results to other problems." (David H Wolpert & William G Macready, "No free lunch theorems for optimization", IEEE Transactions on Evolutionary Computation 1 (1), 1997)

"[...] if you have a general optimization involving uncertainty and very little prior knowledge, the situation is rather hopeless. Due to the NFL theorem, you cannot do any better than a blind search. Each blind search evaluation will be very expensive, with no hope of future improvement, theoretical or otherwise. And the number of performance searches required to get anywhere is simply too large. Neither time nor theoretical or technological progress are on your side. No grand optimization algorithm to end all algorithms is possible." (Yu-Chi Ho, "The no free lunch theorem and the human-machine interface", IEEE Control Systems Magazine, 1999)

"The No Free Lunch (NFL) theorem […] tells us that without any structural assumptions on an optimization problem, no algorithm can perform better on average than blind search." (Yu-Chi Ho, "The no free lunch theorem and the human-machine interface", IEEE Control Systems Magazine, 1999)

"[...] a general-purpose universal optimization strategy is theoretically impossible, and the only way one strategy can outperform another is if it is specialized to the specific problem under consideration." (Yu-Chi Ho & David L Pepyne, "Simple explanation of the no-free-lunch theorem and its implications", Journal of Optimization Theory and Applications 115, 2002)

"Because No Free Lunch is a well-known concept, researchers are increasingly interested in finding function classes that are well matched to their algorithms. The determination of an algorithm–class pairing is generally approximated by testing the algorithm on one of a number of benchmark functions, then generalizing the results by distilling various defining characteristics of those benchmarks." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)

"Because No Free Lunch theorems dictate that no optimization algorithm can be considered more efficient than any other when considering all possible functions, the desired function class plays a prominent role in the model. In particular, this provides a tractable way to answer the traditionally difficult question of what algorithm is best matched to a particular class of functions. Among the benefits of the model are the ability to specify the function class in a straightforward manner, a natural way to specify noisy or dynamic functions, and a new source of insight into No Free Lunch theorems for optimization." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)

"No Free Lunch dictates that any algorithm may be deceived, a difficulty to which the inference algorithm is not immune." (Christopher K Monson, "No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization", [thesis] 2006)

"That the complexity of the problem is inherently tied to the flexibility of the representation of the prior serves to clarify part of No Free Lunch, especially the proof that problem difficulties cannot be ranked in the absence of a specific algorithm: it is not, in fact, the choice of algorithm that makes ranking possible among problems, but the choice of representation." (Christopher K Monson, "No Free Lunch, Bayesian Inference,and Utility: A Decision Theoretic Approach to Optimization", [thesis] 2006)

"The No Free Lunch work is a framework that addresses the core aspects of search, focusing on the connection between fitness functions and effective search algorithms. The central importance of this connection is demonstrated by the No Free Lunch theorem, which states that, averaged over all problems, all search algorithms perform equally. This result implies that if we are comparing a genetic algorithm to some other algorithm (e.g., simulated annealing, or even random search) and the genetic algorithm to some other algorithm (e.g., simulated annealing, or even random search) performs better for some class of problems, then the other algorithm necessarily performs better on problems outside the class. Thus it is essential to incorporate knowledge of the problem into the search algorithm." (S N Sivanandam & S N Deepa, "Introduction to Genetic Algorithms", 2008)

"A priori, it is clear that no method will always be the best [...]. However, it is reasonable to argue that each method will have a set of functions, a type of data, and a range of sample sizes for which it is optimal – a sort of catchment region for each procedure. Ideally, one could partition a space of regression problems into catchment regions, depending on which methods were under consideration, and determine which catchment region seemed most appropriate for each method. This ideal solution would amount to a selection principle for nonparametric methods. Unfortunately, it is unclear how to do this, not least because the catchment regions are unknown." (Bertrand Clarke et al, "Principles and Theory for Data Mining and Machine Learning", 2009)

"Methods perform well if the conditions of their derivation are met, with no method capable of covering all possible conditions. More strongly put, each good method has a domain on which it may be best, and different methods have different domains so the task is to characterize those domains and then figure out which domain a given problem’s solution is likely to be in. This of course, is extraordinarily difficult in its own right." (Bertrand Clarke et al, "Principles and Theory for Data Mining and Machine Learning", 2009)

"The well-known 'No Free Lunch' theorem indicates that there does not exist a pattern classification method that is inherently superior to any other, or even to random guessing without using additional information. It is the type of problem, prior information, and the amount of training samples that determine the form of classifier to apply. In fact, corresponding to different real-world problems, different classes may have different underlying data structures. A classifier should adjust the discriminant boundaries to fit the structures which are vital for classification, especially for the generalization capacity of the classifier." (Hui Xue et al, "SVM: Support Vector Machines", 2009)

"Another important fact, having impact on EA [Evolutionary Algorithm] use is so called No Free Lunch Theorem (NFLT) [...]. Main idea of this theorem is that there is no ideal algorithm which would be able to solve any problem. Simply, if there are for example two algorithms A and B, then for certain subset of possible problems is more suitable algorithms A and for another subset algorithm B. All those subsets can be of course totally disconnected, or/and overlapped." (Ivan Zelinka & Hendrik Richter, "Evolutionary Algorithms for Chaos Researchers", Studies in Computational Intelligence Vol. 267, 2010)

"Each learning algorithm dictates a certain model that comes with a set of assumptions. This inductive bias leads to error if the assumptions do not hold for the data. Learning is an ill-posed problem and with finite data, each algorithm converges to a different solution and fails under different circumstances. The performance of a learner may be fine-tuned to get the highest possible accuracy on a validation set, but this finetuning is a complex task and still there are instances on which even the best learner is not accurate enough. The idea is that there may be another base-learner learner that is accurate on these. By suitably combining multiple base learners then, accuracy can be improved." (Ethem Alpaydin, "Introduction to Machine Learning" 2nd Ed, 2010)

"The problem of comparing classifiers is not at all an easy task. There is no single classifier that works best on all given problems, phenomenon related to the 'No-free-lunch' metaphor, i.e., each classifier (’restaurant’) provides a specific technique associated with the corresponding costs (’menu’ and ’price’ for it). It is hence up to us, using the information and knowledge at hand, to find the optimal trade-off." (Florin Gorunescu, "Data Mining Concepts, Models and Techniques", 2011)

"As a consequence of the no free lunch theorem, we need to develop many different types of models, to cover the wide variety of data that occurs in the real world. And for each model, there may be many different algorithms we can use to train the model, which make different speed-accuracy-complexity tradeoffs." (Kevin P Murphy, "Machine Learning: A Probabilistic Perspective", 2012)

"Much of machine learning is concerned with devising different models, and different algorithms to fit them. We can use methods such as cross validation to empirically choose the best method for our particular problem. However, there is no universally best model - this is sometimes called the no free lunch theorem. The reason for this is that a set of assumptions that works well in one domain may work poorly in another." (Kevin P Murphy, "Machine Learning: A Probabilistic Perspective", 2012)

"The validity of NFL theorems largely depends on the validity of their fundamental assumptions. However, whether these assumptions are valid in practice is another question. Often, these assumptions are too stringent, and thus free lunches are possible." (Xin-She Yang, "Free Lunch or No Free Lunch: That is not Just a Question?", 2012)

"The idea of feature learning is to automate the process of finding a good representation of the input space. As mentioned before, the No-Free-Lunch theorem tells us that we must incorporate some prior knowledge on the data distribution in order to build a good feature representation." (Shai Shalev-Shwartz & Shai Ben-David, "Understanding Machine Learning: From Theory to Algorithms", 2014)

"We emphasize that while there are some common techniques for feature learning one may want to try, the No-Free-Lunch theorem implies that there is no ultimate feature learner. Any feature learning algorithm might fail on some problem. In other words, the success of each feature learner relies (sometimes implicitly) on some form of prior assumption on the data distribution. Furthermore, the relative quality of features highly depends on the learning algorithm we are later going to apply using these features." (Shai Shalev-Shwartz & Shai Ben-David, "Understanding Machine Learning: From Theory to Algorithms", 2014)

"Choosing an appropriate classification algorithm for a particular problem task requires practice: each algorithm has its own quirks and is based on certain assumptions. To restate the 'No Free Lunch' theorem: no single classifier works best across all possible scenarios. In practice, it is always recommended that you compare the performance of at least a handful of different learning algorithms to select the best model for the particular problem; these may differ in the number of features or samples, the amount of noise in a dataset, and whether the classes are linearly separable or not." (Sebastian Raschka, "Python Machine Learning", 2015)

"Learning theory claims that a machine learning algorithm can generalize well from a finite training set of examples. This seems to contradict some basic principles of logic. Inductive reasoning, or inferring general rules from a limited set of examples, is not logically valid. To logically infer a rule describing every member of a set, one must have information about every member of that set." (Ian Goodfellow et al, "Deep Learning", 2015)

"The no free lunch theorem for machine learning states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average performance (over all possible tasks) as merely predicting that every point belongs to the same class. [...] the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the 'real world' that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about." (Ian Goodfellow et al, "Deep Learning", 2015)

"The no free lunch theorem implies that we must design our machine learning algorithms to perform well on a specific task. We do so by building a set of preferences into the learning algorithm. When these preferences are aligned with the learning problems we ask the algorithm to solve, it performs better." (Ian Goodfellow et al, "Deep Learning", 2015)

"The 'No free lunch' theorem demonstrates that it is not possible to find one algorithm behaving better for any problem. On the other hand, we know that we can work with different degrees of knowledge of the problem which we expect to solve, and that it is not the same to work without knowledge of the problem (hypothesis of the 'no free lunch' theorem) than to work with partial knowledge about the problem, knowledge that allows us to design algorithms with specific characteristics which can make them more suitable to solve of the problem." (Salvador García et al, "Data Preprocessing in Data Mining", 2015)

"Roughly stated, the No Free Lunch theorem states that in the lack of prior knowledge (i.e. inductive bias) on average all predictive algorithms that search for the minimum classification error (or extremum over any risk metric) have identical performance according to any measure." (N D Lewis, "Deep Learning Made Easy with R: A Gentle Introduction for Data Science", 2016)

"However, because ML algorithms are biased to look for different types of patterns, and because there is no one learning bias across all situations, there is no one best ML algorithm. In fact, a theorem known as the 'no free lunch theorem' states that there is no one best ML algorithm that on average outperforms all other algorithms across all possible data sets." (John D Kelleher & Brendan Tierney, "Data Science", 2018)

"The no free lunch theorems set limits on the range of optimality of any method. That is, each methodology has a ‘catchment area’ where it is optimal or nearly so. Often, intuitively, if the optimality is particularly strong then the effectiveness of the methodology falls off more quickly outside its catchment area than if its optimality were not so strong. Boosting is a case in point: it seems so well suited to binary classification that efforts to date to extend it to give effective classification (or regression) more generally have not been very successful. Overall, it remains to characterize the catchment areas where each class of predictors performs optimally, performs generally well, or breaks down." (Bertrand S Clarke & Jennifer L. Clarke, "Predictive Statistics: Analysis and Inference beyond Models", 2018)

"The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally.  […] the importance of the theorems arises by using them to analyze scenarios involving non-uniform distributions, and to compare different algorithms, without any assumption about the distribution over problems at all. In particular, the theorems prove that anti-cross-validation (choosing among a set of candidate algorithms based on which has worst out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption - which has never been formalized - about how the distribution over induction problems, on the one hand, is related to the set of algorithms one is choosing among using (anti-)cross validation, on the other. In addition, they establish strong caveats concerning the significance of the many results in the literature which establish the strength of a particular algorithm without assuming a particular distribution." (David H Wolpert, "What is important about the No Free Lunch theorems?", 2020)

"A well-known theorem called the 'no free lunch' theorem proves exactly what we anecdotally witness when designing and building learning systems. The theorem states that any bias-free learning system will perform no better than chance when applied to arbitrary problems. This is a fancy way of stating that designers of systems must give the system a bias deliberately, so it learns what’s intended. As the theorem states, a truly bias- free system is useless." (Erik J Larson, "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do", 2021)

24 May 2018

🔬Data Science: Pattern Recognition (Definitions)

"The categorization of patterns in some domain into meaningful classes. A pattern usually has the form of a vector of measurement values." (Guido Deboeck & Teuvo Kohonen (Eds), "Visual Explorations in Finance with Self-Organizing Maps 2nd Ed.", 2000)

"in the most general sense the same as artificial perception." (Teuvo Kohonen, "Self-Organizing Maps" 3rd Ed., 2001)

"The operation and design of systems that recognize patterns in data." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)

"Research area that enclose the development of methods and automatized techniques for identification and classification of samples in specific groups, in accordance with representative characteristics." (Paulo E Ambrósio, "Artificial Intelligence in Computer-Aided Diagnosis",  Encyclopedia of Artificial Intelligence, 2009)

"The process of identifying patterns in data via algorithms to make predictions within a subject area." (Jason Williamson, Getting a Big Data Job For Dummies, 2015)

"A branch of machine learning that recognizes and separates the patterns of one class from the other." (Mridusmita Sharma & Kandarpa K Sarma, "Soft-Computational Techniques and Spectro-Temporal Features for Telephonic Speech Recognition: An Overview and Review of Current State of the Art", 2016)

"A pattern is a particular configuration of data; for example, ‘A’ is a composition of three strokes. Pattern recognition is the detection of such patterns." (Ethem Alpaydın, "Machine learning : the new AI", 2016)

"Pattern Recognition in the discipline which tries to find the classes in the datasets of the various applications and it is the major building block of artificially intelligent systems." (Vandana M Ladwani, "Support Vector Machines and Applications", 2017)

"identifying patterns in data via algorithms to make predictions of new data coming from the same source." (Analytics Insight)

20 May 2018

🔬Data Science: Semi-supervised Learning (Definitions)

"machine learning technique that uses both labelled and unlabelled data for constructing the model." (Óscar Pérez & Manuel Sánchez-Montañés, "Class Prediction in Test Sets with Shifted Distributions", 2009)

"The set of learning algorithms in which the samples in training dataset are all unlabelled." (Jun Jiang & Horace H S Ip, "Active Learning with SVM, Encyclopedia of Artificial Intelligence", 2009) 

"Learning to label new data using both labeled training data plus unlabeled data." (Jesse Read & Albert Bifet, "Multi-Label Classification", 2014)

"A method of empirical concept learning from unlabeled data. The task is to build a model that finds groups of similar examples or that finds dependencies between attribute-value tuples." (Petr Berka, "Machine Learning", 2015)

"Combines the methodology of the supervised learning to process the labeled data with the unsupervised learning to compute the unlabeled data." (Nuno Pombo et al, "Machine Learning Approaches to Automated Medical Decision Support Systems", 2015)

"Estimation of the parameters of a model considering only un-labeled data and without the help of human experts." (Manuel Martín-Merino, "Semi-Supervised Dimension Reduction Techniques to Discover Term Relationships", 2015)

"In this category either the model is developed in such a way that either there are labels exist for all kind of observations or there is no label exist." (Neha Garg & Kamlesh Sharma, "Machine Learning in Text Analysis", 2020)

"It is a machine learning algorithm in which the machine learns from both labeled and unlabeled instances to build a model for predicting the class of unlabeled instances." (Gunjan Ansari et al, "Natural Language Processing in Online Reviews", 2021)

"Semi-supervised learning aims at labeling a set of unlabelled data with the help of a small set of labeled data." (Hari K Kondaveeti et al, "Deep Learning Applications in Agriculture: The Role of Deep Learning in Smart Agriculture", 2021)

"The semi-supervised learning combines both supervised and unsupervised learning algorithms." (M Govindarajan, "Big Data Mining Algorithms", 2021)

19 May 2018

🔬Data Science: Perceptron (Definitions)

"the term is often used to refer to a single layer pattern classification network with linear threshold units" (Laurene V Fausett, "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications", 1994)

"adaptive element for multilayer feedforward networks introduced by Rosenblatt" (Teuvo Kohonen, "Self-Organizing Maps" 3rd Ed., 2001)

"An early theoretical model of the neuron developed by Rosenblatt (1958) that was the first to incorporate a learning rule. The term is also used as a generic label for all trained feed-forward networks, which is often referred to collectively as multilayer perceptron networks." (David Scarborough & Mark J Somers, "Neural Networks in Organizational Research: Applying Pattern Recognition to the Analysis of Organizational Behavior", 2006)

"A type of binary classifier that maps its inputs (a vector of real type) to an output value (a scalar real type). The perceptron may be considered as the simplest model of feed-forward neural network, as the inputs directly feeding the output units through weighted connections." )Crescenzio Gallo, "Artificial Neural Networks Tutorial", 2015)

"A perceptron is a type of a neural network organized into layers where each layer receives connections from units in the previous layer and feeds its output to the units of the layer that follow." (Ethem Alpaydın, "Machine learning : the new AI", 2016)

"Perceptron is a learning algorithm which is used to learn the decision boundary for linearly separable data." Vandana M Ladwani, "Support Vector Machines and Applications", 2017)

"A simple neural network model consisting of one unit and inputs with variable weights that can be trained to classify inputs into categories." (Terrence J Sejnowski, "The Deep Learning Revolution", 2018)

"The simplest form of artificial neural network, a basic operational unit which employs supervised learning. It is used to classify data into two classes." (Gaetano B Ronsivalle & Arianna Boldi, "Artificial Intelligence Applied: Six Actual Projects in Big Organizations", 2019)

"A perceptron is a single-layer neural network. It includes input values, weights and bias, net sum, and an activation function." (Prisilla Jayanthi & Muralikrishna Iyyanki, "Deep Learning Techniques for Prediction, Detection, and Segmentation of Brain Tumors", 2020)

"The basic unit of a neural network that encodes inputs from neurons of the previous layer using a vector of weights or parameters associated with the connections between perceptrons." Mário P Véstias, "Deep Learning on Edge: Challenges and Trends", 2020)

"these are machine learning algorithms that undertake supervised learning of functions called binary classifiers which decide whether or not an input, usually identified with a vector of numbers, belongs to a particular class." (Hari Kishan Kondaveeti et al, "Deep Learning Applications in Agriculture: The Role of Deep Learning in Smart Agriculture", 2021)

🔬Data Science: Convolutional Neural Network [CNN] (Definitions)

"A multi layer neural network similar to artificial neural networks only differs in its architecture and mainly built to recognize visual patterns from image pixels." (Nishu Garg et al, "An Insight Into Deep Learning Architectures, Latent Query Features", 2018)

"In machine learning, a convolutional neural network is a class of deep, feed-forward artificial neural networks that has successfully been applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics." (V E Jayanthi, "Automatic Detection of Tumor and Bleed in Magnetic Resonance Brain Images", 2018)

"A special type of feed-forward neural network optimized for image data processing. The key features of CNN architecture include sharing weights, using pooling layers, implementing deep structures with multiple hidden layers." (Lyudmila N. Tuzova et al, "Teeth and Landmarks Detection and Classification Based on Deep Neural Networks", 2019)

"A type of artificial neural networks, which uses a set of filters with tunable (learnable) parameters to extract local features from the input data." (Sergei Savin & Aleksei Ivakhnenko, "Enhanced Footsteps Generation Method for Walking Robots Based on Convolutional Neural Networks", 2019) 

"A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data by means of learnable filters." (Loris Nanni et al, "Digital Recognition of Breast Cancer Using TakhisisNet: An Innovative Multi-Head Convolutional Neural Network for Classifying Breast Ultrasonic Images", 2020)

"A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data. CNNs are powerful image processing, artificial intelligence (AI) that use deep learning to perform both generative and descriptive tasks, often using machine vision that includes image and video recognition, along with recommender systems and natural language processing (NLP)." (Mohammad F Hashmi et al, "Subjective and Objective Assessment for Variation of Plant Nitrogen Content to Air Pollutants Using Machine Intelligence", 2020)

"A neural network with a convolutional layer which does the mathematical operation of convolution in addition to the other layers of deep neural network." (S Kayalvizhi & D Thenmozhi, "Deep Learning Approach for Extracting Catch Phrases from Legal Documents", 2020)

"A special type of neural networks used popularly to analyze photography and imagery." (Murad Al Shibli, "Hybrid Artificially Intelligent Multi-Layer Blockchain and Bitcoin Cryptology", 2020)

"In deep learning, a convolutional neural network is a class of deep neural networks, most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing." (R Murugan, "Implementation of Deep Learning Neural Network for Retinal Images", 2020)

"A class of deep neural networks applied to image processing where some of the layers apply convolutions to input data." (Mário P Véstias, "Convolutional Neural Network", 2021)

"A convolution neural network is a kind of ANN used in image recognition and processing of image data." (M Srikanth Yadav & R Kalpana, "A Survey on Network Intrusion Detection Using Deep Generative Networks for Cyber-Physical Systems", 2021)

"A multi-layer neural network similar to artificial neural networks only differs in its architecture and mainly built to recognize visual patterns from image pixels." (Udit Singhania & B K Tripathy, "Text-Based Image Retrieval Using Deep Learning", 2021) 

"A type of deep learning algorithm commonly applied in analyzing image inputs." (Jinnie Shin et al, "Automated Essay Scoring Using Deep Learning Algorithms", 2021)

"It is a class of deep neural networks, most commonly applied to analyzing visual imagery." (Sercan Demirci et al, "Detection of Diabetic Retinopathy With Mobile Application Using Deep Learning", 2021)

"They are a class of deep neural networks that are generally used to analyze image data. They use convolution instead of simple matrix multiplication in a few layers of the network. They have shared weights architecture and have translation invariant characteristics." Vijayaraghavan Varadharajan & J Rian Leevinson, "Next Generation of Intelligent Cities: Case Studies from Europe", 2021) 

18 May 2018

🔬Data Science: Boltzmann Machine (Definitions)

[Boltzmann machine (with learning):] "A net that adjusts its weights so that the equilibrium configuration of the net will solve a given problem, such as an encoder problem" (David H Ackley et al, "A learning algorithm for boltzmann machines", Cognitive Science Vol. 9 (1), 1985)

[Boltzmann machine (without learning):] "A class of neural networks used for solving constrained optimization problems. In a typical Boltzmann machine, the weights are fixed to represent the constraints of the problem and the function to be optimized. The net seeks the solution by changing the activations (either 1 or 0) of the units based on a probability distribution and the effect that the change would have on the energy function or consensus function for the net." (David H Ackley et al, "A learning algorithm for boltzmann machines", Cognitive Science Vol. 9 (1), 1985)

"neural-network model otherwise similar to a Hopfield network but having symmetric interconnects and stochastic processing elements. The input-output relation is optimized by adjusting the bistable values of its internal state variables one at a time, relating to a thermodynamically inspired rule, to reach a global optimum." (Teuvo Kohonen, "Self-Organizing Maps 3rd" Ed., 2001)

"A neural network model consisting of interacting binary units in which the probability of a unit being in the active state depends on its integrated synaptic inputs." (Terrence J Sejnowski, "The Deep Learning Revolution", 2018)

"An unsupervised network that maximizes the product of probabilities assigned to the elements of the training set." (Mário P Véstias, "Deep Learning on Edge: Challenges and Trends", 2020)

"Restricted Boltzmann machine (RBM) is an undirected graphical model that falls under deep learning algorithms. It plays an important role in dimensionality reduction, classification and regression. RBM is the basic block of Deep-Belief Networks. It is a shallow, two-layer neural networks. The first layer of the RBM is called the visible or input layer while the second is the hidden layer. In RBM the interconnections between visible units and hidden units are established using symmetric weights." (S Abirami & P Chitra, "The Digital Twin Paradigm for Smarter Systems and Environments: The Industry Use Cases", Advances in Computers, 2020)

"A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables." (Udit Singhania & B. K. Tripathy, "Text-Based Image Retrieval Using Deep Learning",  2021) 

"A Boltzmann machine is a neural network of symmetrically connected nodes that make their own decisions whether to activate. Boltzmann machines use a straightforward stochastic learning algorithm to discover “interesting” features that represent complex patterns in the database." (DeepAI) [source]

"Boltzmann Machines is a type of neural network model that was inspired by the physical process of thermodynamics and statistical mechanics. [...] Full Boltzmann machines are impractical to train, which is one of the reasons why a limited form, called the restricted Boltzmann machine, is used." (Accenture)

"RBMs [Restricted Boltzmann Machines] are a type of probabilistic graphical model that can be interpreted as a stochastic artificial neural network. RBNs learn a representation of the data in an unsupervised manner. An RBN consists of visible and hidden layer, and connections between binary neurons in each of these layers. RBNs can be efficiently trained using Contrastive Divergence, an approximation of gradient descent." (Wild ML)
