05 October 2025

🖍️Qiang Yang - Collected Quotes

"In a nutshell, transfer learning refers to the machine learning paradigm in which an algorithm extracts knowledge from one or more application scenar-ios to help boost the learning performance in a target scenario. Compared to tra-ditional machine learning, which requires large amounts of well-defined training data as the input, transfer learning can be understood as a new learning paradigm." (Qiang Yang et al, "Transfer Learning", 2020)

"[...] in machine learning practice, we observe that we are often surrounded with lots of small-sized data sets, which are often isolated and fragmented. Many organizations do not have the ability to collect a huge amount of big data due to a number of constraints that range from resource limitations to organizations inter-ests, and to regulations and concerns for user privacy. This small-data challenge is a serious problem faced by many organizations applying AI technology to their problems. Transfer learning is a suitable solution for addressing this challenge be-cause it can leverage many auxiliary data and external models, and adapt them to solve the target problems." (Qiang Yang et al, "Transfer Learning", 2020)

"Latent factor analysis is a statistical method that describes observed variables and their relationship in terms of a potentially fewer number of unobserved variables called latent factors. The general idea behind latent factor analysis for heterogeneous transfer learning is to extract latent factors shared by a source and a target domain, given observed feature representations of both domains. By projecting a target domain onto the latent space where the shared latent factors lie, the feature representation of the target domain is enriched with these shared la-tent factors that encode knowledge from one or multiple source domains, and improve the performance in kinds of tasks." (Qiang Yang et al, "Transfer Learning", 2020)

"Model-based transfer learning, also known as parameter-based transfer learn-ing, assumes that the source task and the target task share some common knowl-edge in the model level. That means the transferred knowledge is encoded into model parameters, priors or model architectures. Therefore, the goal of model-based transfer learning is to discover what part of the model learned in the source domain can help the learning of the model for target domain." (Qiang Yang et al, "Transfer Learning", 2020)

"[...] similar to supervised learning, the problem of insufficient data also haunts the performance of learning models on relational domains. When the relational domain changes, the learned model usually performs poorly and has to be rebuilt from scratch. Beside the low quantities of high-quality data instances, the available relations may also be too scarce to learn an accurate model, especially when there are many kinds of relations. So transfer learning is suitable for relational learning to overcome the reliance on large quantities of high-quality data by leveraging useful information from other related domains, leading to relation-based transfer learning. In addition, relation-based transfer learning can speed up the learning process in the target domain and hence improve the efficiency." (Qiang Yang et al, "Transfer Learning", 2020)

"Transfer learning and machine learning are closely related. On one hand, the aim of transfer learning encompasses that of machine learning in that its key ingredient is 'generalization'. In other words, it explores how to develop general and robust machine learning models that can apply to not only the training data, but also unanticipated future data. Therefore, all machine learning models should have the ability to conduct transfer learning. On the other hand, transfer learning differs from other branches of machine learning in that transfer learning aims to generalize commonalities across different tasks or domains, which are 'sets' of instances, while machine learning focuses on generalize commonalities across 'instances'. This difference makes the design of the learning algorithms quite different." (Qiang Yang et al, "Transfer Learning", 2020)

"[...] transfer learning can make AI and machine learning systems more reliable and robust. It is often the case that, when building a machine learning model, one cannot foresee all future situations. In machine learning, this problem is of-ten addressed using a technique known as regularization, which leaves room for future changes by limiting the complexity of the models. Transfer learning takes this approach further, by allowing the model to be complex while being prepared for changes when they actually come." (Qiang Yang et al, "Transfer Learning", 2020)

"Transfer learning deals with how systems can quickly adapt themselves to new situations, new tasks and new environments. It gives machine learning systems the ability to leverage auxiliary data and models to help solve target problems when there is only a small amount of data available in the target domain. This makes such systems more reliable and robust, keeping the machine learning model faced with unforeseeable changes from deviating too much from expected performance. At an enterprise level, transfer learning allows knowledge to be reused so experience gained once can be repeatedly applied to the real world." (Qiang Yang et al, "Transfer Learning", 2020)

"[...] transfer learning makes use of not only the data in the target task domain as input to the learning algorithm, but also any of the learning process in the source domain, including the training data, models and task description." (Qiang Yang et al, "Transfer Learning", 2020)
