
A Guide to Transfer Learning in Deep Learning

  • Bhumika Dutta
  • Jan 06, 2022

Transferring knowledge is a fairly common behavior that we see in people as they learn new skills. Knowledge gained on one task is used to accomplish other related tasks, and the more closely connected the tasks are, the easier it is for us to transfer, or cross-utilize, the knowledge.

 

In machine learning and deep learning, a similar notion applies: knowledge obtained from one task is reused to solve a related problem. This article explains transfer learning in deep learning.


 

Understanding Transfer Learning

 

Transfer learning is a machine learning technique in which a model developed for one task is reused as the starting point for a model on a second, related task.

 

It is a popular approach in deep learning, where pre-trained models are used as the starting point for computer vision and natural language processing tasks, given the vast compute and time resources required to develop neural network models for these problems and the large jumps in skill that they provide on related problems.

 

 

What is a pre-trained model?

 

A pre-trained model is a saved network that was previously trained on a large dataset, typically for a large-scale image-classification task. One can use a pre-trained model as-is, or apply transfer learning to tailor it to a specific task.

 

Transfer learning for image classification rests on the idea that a model trained on a sufficiently large and general dataset can serve as a generic model of the visual world. Its learned feature maps can then be reused, rather than starting from scratch by training a huge model on a large dataset.
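As a concrete illustration, here is a minimal sketch of loading a pre-trained model in Keras and reusing its learned feature maps. MobileNetV2 and the 160x160 input size are arbitrary choices for illustration, not a recommendation:

```python
import numpy as np
import tensorflow as tf

# Load a convolutional base pre-trained on ImageNet. include_top=False
# drops the original classification head, keeping only the feature extractor.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights="imagenet",
)

# A random batch of one image stands in for real data here.
image = np.random.rand(1, 160, 160, 3).astype("float32")
features = base_model(image)
print(features.shape)  # (1, 5, 5, 1280): generic feature maps, ready to reuse
```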

 

In the classic supervised learning scenario of machine learning, if we want to train a model for some task and domain A, we assume that we are given labeled data for that same task and domain. This is shown in the image below, where the task and domain of Model A's training and test data are the same.


Traditional supervised learning setup in ML (source)


The classic supervised learning paradigm breaks down when we don't have enough labeled data for the task or domain we care about to train a reliable model. Suppose we want to train a model to recognize pedestrians in night-time photographs: we might try to reuse a model that has been trained on a comparable domain.

 

In practice, however, we frequently see a drop in performance, because the model has inherited the bias of its training data and cannot generalize to the new domain. And if we want a model to perform a new task, we cannot even reuse an existing model directly, since the labels differ between the tasks.

 

Transfer learning enables us to cope with these scenarios by leveraging labeled data from a related task or domain.

 

 

Customizing a pre-trained model for deep learning:

 

TensorFlow describes two ways to customize a pre-trained model, both of which are outlined below, with a minimal sketch of each after the list:

 

  1. Feature Extraction: 

 

To repurpose the feature maps learned previously on a new dataset, simply place a new classifier on top of the pre-trained model; only this classifier is trained from scratch.

 

The complete model does not need to be (re)trained. The underlying convolutional base already contains features that are generically useful for classifying images. The final classification head of the pre-trained model, on the other hand, is specific to the original classification task, and hence to the set of classes on which the model was trained.

 

  2. Fine-Tuning: 

 

Unfreeze a few of the top layers of the frozen model base and jointly train the newly added classifier layers and these last layers of the base model. This allows us to "fine-tune" the base model's higher-order feature representations to make them more relevant for the task at hand.
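The first sketch below shows feature extraction in Keras: the convolutional base is frozen and a new classifier head is trained on top of it. The binary head and the learning rate are assumptions for illustration, and train_dataset is a placeholder for your own data:

```python
import tensorflow as tf

# Pre-trained convolutional base, without its original classification head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")

# Freeze the base so only the new classifier head is trained from scratch.
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # new head for a hypothetical binary task
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_dataset, epochs=10)  # train_dataset is assumed to exist
```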
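The second sketch continues from the one above: the top of the base model is unfrozen and trained together with the new head, at a much lower learning rate so the pre-trained weights are only gently adjusted. The cut-off layer index is an assumption; choose it to suit your model and data:

```python
# Continuing the feature-extraction sketch: unfreeze the base model, then
# re-freeze everything below an assumed cut-off so only the top layers train.
base_model.trainable = True
for layer in base_model.layers[:100]:
    layer.trainable = False

# Recompile with a much lower learning rate before continuing training.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_dataset, epochs=10)
```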

 

 

How to use Transfer Learning?

 

Two popular approaches to using transfer learning are commonly described by machine learning practitioners; both are outlined below, with a minimal sketch of each after the list.

 

  1. Pre-trained Model Approach: 

 

As noted above, pre-trained models are employed for transfer learning. A pre-trained source model is picked from the available models.

 

Many research institutions release models trained on large and challenging datasets, and these can be included in the pool of candidate models. The chosen pre-trained model can then be used as the starting point for a model on the second task of interest.

 

Depending on the modeling technique employed, this may involve using all or only sections of the model; the first sketch after this list shows how to cut a model at an intermediate layer. The model may also need to be adapted or refined on the input-output pair data available for the task of interest.

 

  2. Develop Model Approach:

 

To use the develop-model strategy, first select a related predictive modeling problem with an abundance of data, one in which the input data, the output data, and/or the concepts learned in mapping inputs to outputs are related to the task of interest. The next step is to develop a skillful model for this first task.

 

The model must perform better than a naive model, to ensure that some feature learning has actually occurred. The model fit on the source task can then be used as the basis for a model on the second task of interest, as the second sketch after this list illustrates.

 

As with the pre-trained model approach, this may involve using all or only sections of the model, and the model may need to be adapted or refined on the input-output pair data available for the task of interest.
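As the first sketch, here is one way to reuse only a section of a published pre-trained model in Keras: the network is cut at an intermediate layer and everything up to that point is kept. "block3_pool" is one of VGG16's documented layer names; the choice of model and cut point is purely illustrative:

```python
import tensorflow as tf

# A candidate source model released by a research group, trained on ImageNet.
source = tf.keras.applications.VGG16(weights="imagenet", include_top=False)

# Reuse only a section of the model: everything up to an intermediate layer.
partial = tf.keras.Model(
    inputs=source.input,
    outputs=source.get_layer("block3_pool").output,
)
partial.summary()
```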
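And as the second sketch, a toy illustration of the develop-model approach: a model is first fitted on a data-rich source task, then its learned layers are reused as the starting point for the target task. All layer names, shapes, and class counts here are hypothetical:

```python
import tensorflow as tf

# Step 1: develop a skillful model on a related, data-rich source task.
source_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,),
                          name="shared_1"),
    tf.keras.layers.Dense(64, activation="relu", name="shared_2"),
    tf.keras.layers.Dense(10, activation="softmax", name="source_head"),
])
# source_model.compile(...) and source_model.fit(...) on the source data.

# Step 2: reuse the fitted layers (weights included) and attach a new head
# for the task of interest; the reused layers can be frozen or refined.
target_model = tf.keras.Sequential([
    source_model.get_layer("shared_1"),
    source_model.get_layer("shared_2"),
    tf.keras.layers.Dense(3, activation="softmax", name="target_head"),
])
```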

 

 

Deep Transfer Learning techniques:

 

Deep learning has made significant progress in recent years, allowing us to tackle difficult problems and achieve impressive results. However, deep learning systems require far more training time and data than traditional machine learning systems.

 

Deep learning networks with state-of-the-art performance have been designed and tested in a variety of applications, including computer vision and natural language processing (NLP). The details of these networks are usually shared by their teams for others to use.

 

Deep learning models are representative of what is known as inductive learning. The goal of an inductive-learning algorithm is to infer a mapping from a set of training samples.

 

In classification, for example, the model learns to map input features to class labels. For such a learner to generalize successfully to new data, its algorithm must rely on a set of assumptions about the distribution of the training data. These assumptions are known as inductive bias.

 

Inductive bias can be characterized by several factors, such as the hypothesis space the learner is confined to and the search procedure it uses within that hypothesis space. As a result, these biases affect how, and what, the model learns on the given task and domain.

 

(Suggested read: How is Transfer Learning done in Neural Networks)

 

 

Applications of Transfer Learning:

 

  1. Simulations are a relatively recent development that is generating a lot of interest. For many machine learning applications that rely on hardware for interaction, gathering data and training a model in the real world is either expensive, time-consuming, or simply too risky.

 

As a result, it is preferable to gather data in a less dangerous way, and simulation is the primary technique for doing so; it is used to enable many complex ML systems in the real world. Learning from a simulation and then applying the acquired knowledge in the real world is an instance of transfer learning.

 

  2. Transfer learning is commonly used for natural language processing tasks that involve text as input or output. For these problems, word embeddings are used: mappings of words to a high-dimensional continuous vector space in which distinct words with similar meanings have similar vector representations.

 

Efficient techniques exist for learning these distributed word representations, and it is typical for research groups to release pre-trained models, trained on very large corpora of text, under a permissive license. These models can be downloaded and incorporated into deep learning language models, either to interpret words as input or to generate words as output; the first sketch after this list shows the idea.

 

  3. Transfer learning is commonly used for predictive modeling problems that take image data as input, whether still images or video. For these tasks, it is typical to employ a deep learning model that has been pre-trained on a large and difficult image classification challenge, such as the ImageNet 1000-class photograph classification competition. The research organizations that design models for this competition and place well typically release their final models under a permissive license for others to use.

 

On modern hardware, these models can take days or weeks to train. Once downloaded, they can be used as-is or as the basis for new models that take image data as input; the second sketch after this list shows the as-is case.

 

Because such a model was trained on a huge corpus of photographs and must make predictions over a relatively large number of classes, it is forced to learn to extract features from photographs efficiently.
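Here is the first sketch, for the word-embedding case referenced in the second application above: seeding a Keras Embedding layer with pre-trained GloVe vectors. It assumes glove.6B.100d.txt has already been downloaded, and the two-word vocabulary is a placeholder for your tokenizer's real word index:

```python
import numpy as np
import tensorflow as tf

embedding_dim = 100

# Parse the pre-trained vectors: each line is a word followed by its floats.
embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        word, *vector = line.split()
        embeddings[word] = np.asarray(vector, dtype="float32")

word_index = {"cat": 1, "dog": 2}  # placeholder: comes from your tokenizer

# Row i of the matrix holds the pre-trained vector for word id i.
matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
    if word in embeddings:
        matrix[i] = embeddings[word]

# trainable=False keeps the transferred representations fixed;
# set it to True to fine-tune them on the downstream task.
embedding_layer = tf.keras.layers.Embedding(
    input_dim=matrix.shape[0],
    output_dim=embedding_dim,
    embeddings_initializer=tf.keras.initializers.Constant(matrix),
    trainable=False,
)
```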
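And the second sketch, for the image case in the third application: downloading an ImageNet-trained ResNet50 and using it as-is for prediction. "elephant.jpg" is a placeholder path:

```python
import numpy as np
import tensorflow as tf

# Download a model pre-trained on the ImageNet 1000-class task.
model = tf.keras.applications.ResNet50(weights="imagenet")

# Load and preprocess a single image the way ResNet50 expects.
img = tf.keras.preprocessing.image.load_img(
    "elephant.jpg", target_size=(224, 224))
x = tf.keras.applications.resnet50.preprocess_input(
    tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...])

# decode_predictions maps the 1000 class scores back to readable labels.
preds = model.predict(x)
print(tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0])
```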

 

In conclusion, transfer learning is an optimization: a shortcut to save time or get better performance. In general, it is not clear whether transfer learning will benefit a given domain until after the model has been built and evaluated.
