How transferable are features in deep neural networks?

This paper investigates the transferability of features across layers in deep neural networks, quantifying their generality and specificity, and identifying factors that affect performance degradation during transfer.

Abstract

Features in deep neural networks transition from general to specific with depth; transferability suffers both from higher-layer neurons specializing to their original task and from optimization difficulties that arise when co-adapted neurons are split between frozen and retrained layers.

Introduction

This paper investigates the transition from general to specific features in deep neural networks, quantifying layer-wise transferability and exploring its implications for transfer learning.

Generality vs. Specificity Measured as Transfer Performance

Generality of learned features is defined and measured by their performance when transferred to a new task, using pairs of ImageNet subsets to create similar and dissimilar tasks.
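The transfer measurement described above can be sketched in a few lines. This is a minimal, library-free illustration, not the paper's code: the function name `make_transfer_net` and the toy layer labels are ours.

```python
# Minimal sketch of the paper's transfer experiment: a "network" is
# modeled as an ordered list of per-layer parameters. The helper name
# make_transfer_net and the toy labels are illustrative assumptions.

import copy

def make_transfer_net(source_layers, target_layers, n):
    """Build a transfer network: take the first n layers from the
    source task's trained network, keep the remaining layers from the
    target task's freshly initialized network."""
    assert 0 <= n <= len(source_layers) == len(target_layers)
    return copy.deepcopy(source_layers[:n]) + copy.deepcopy(target_layers[n:])

# Toy example: an 8-layer network, each layer a single labeled blob.
net_A = [f"A{i}" for i in range(1, 9)]   # trained on task A
net_B = [f"B{i}" for i in range(1, 9)]   # freshly initialized for task B

A3B = make_transfer_net(net_A, net_B, 3)
print(A3B)  # ['A1', 'A2', 'A3', 'B4', 'B5', 'B6', 'B7', 'B8']
```

In the paper's notation, such a network would be called A3B: its first three layers are copied from the task-A network and the remaining layers are trained on task B; B3B is the corresponding control built from task B's own features.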

Experimental Setup

Experiments use a standard Caffe implementation of a well-known eight-layer convolutional architecture, trained on ImageNet subsets, so that feature transferability can be studied in a reproducible setting.

Results and Discussion

Experiments reveal that feature transferability decreases with layer depth due to specialization and optimization challenges, but fine-tuning transferred features can improve generalization.
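The frozen-versus-fine-tuned distinction comes down to whether the copied layers receive gradient updates during retraining. A toy sketch, in which `sgd_step`, the learning rate, and the numbers are purely illustrative:

```python
# Sketch of frozen vs fine-tuned transferred layers: during training,
# frozen layers simply skip the gradient update. The helper sgd_step
# and all numeric values are illustrative, not from the paper.

def sgd_step(weights, grads, frozen, lr=0.25):
    """Apply one SGD update per layer, leaving frozen layers untouched."""
    return [w if is_frozen else w - lr * g
            for w, g, is_frozen in zip(weights, grads, frozen)]

weights = [1.0, 1.0, 1.0, 1.0]          # one scalar standing in per layer
grads   = [1.0, 1.0, 1.0, 1.0]
frozen  = [True, True, False, False]    # first two layers transferred & frozen

updated = sgd_step(weights, grads, frozen)
print(updated)  # [1.0, 1.0, 0.75, 0.75]
```

Setting every entry of `frozen` to False corresponds to fine-tuning the whole transferred network, the variant the paper finds can generalize better than training directly on the target task.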

Similar Datasets: Random A/B splits

On similar datasets, first- and second-layer features transfer almost without penalty, but performance degrades in deeper layers through two distinct effects: lost co-adaptation between neurons split across frozen and retrained layers, and the growing specificity of higher-layer features.

Dissimilar Datasets: Splitting Man-made and Natural Classes Into Separate Datasets

Transfer performance significantly declines with increasing task dissimilarity, especially for higher layers, indicating that feature specificity becomes more dominant.

Random Weights

Freezing increasing numbers of layers at random weights degrades a convolutional network's performance toward chance, and features transferred even from a distant task outperform random weights.
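The random-weights control can be sketched in the same style as the transfer networks: freeze the first n layers at random initial values rather than at transferred features, and train only the rest. The name `random_control` and its parameters are illustrative assumptions:

```python
# Sketch of the random-weights baseline: the first n layers are frozen
# at small random values instead of transferred features, and only the
# remaining layers are trained. random_control is an illustrative name.

import random

def random_control(n, depth=8, seed=0):
    """Return (frozen_random, trainable): the first n layers frozen at
    random weights, the remaining depth - n layers left trainable."""
    rng = random.Random(seed)
    frozen_random = [rng.gauss(0.0, 0.01) for _ in range(n)]
    trainable = [0.0] * (depth - n)   # placeholders to be learned
    return frozen_random, trainable

frozen, trainable = random_control(3)
print(len(frozen), len(trainable))  # 3 5
```

Comparing this baseline against the transfer networks at each depth n is what shows that even features from a distant task carry useful information that random weights do not.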

Conclusions

By quantifying transferability, the paper measures feature generality and specificity layer by layer, showing that both optimization difficulties and task specialization hurt transfer, that features transferred even from distant tasks beat random weights, and that fine-tuning transferred features improves generalization.
