Paper Notes 3: GoogLeNet -- Going deeper with convolutions

Architecture

Inception module

[Figure: Inception module]

  1. parallel conv. with different kernel sizes
    visual information should be processed at various scales and then aggregated
    →the next stage can abstract features from different scales simultaneously
  2. max pooling
    pooling operations have been essential for the success of current state-of-the-art convolutional networks
  3. 1×1 conv.
    ① dimension reduction→removes computational bottlenecks
    ② increases the representational power
    refer to Network-in-Network (a minimal sketch of the whole module follows this list)
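To make the four parallel branches concrete, here is a minimal sketch of one Inception module with dimension reductions, assuming PyTorch; the channel counts are the ones the paper lists for inception(3a), while the class and variable names are just for illustration.

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """One Inception module with dimension reductions (four parallel branches)."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        # branch 1: 1x1 conv
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        # branch 2: 1x1 reduction, then 3x3 conv
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))
        # branch 3: 1x1 reduction, then 5x5 conv
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))
        # branch 4: 3x3 max pooling, then 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # outputs of all branches are concatenated along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# inception(3a): 192 channels in -> 64 + 128 + 32 + 32 = 256 channels out
m = Inception(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```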

GoogLeNet

[Table: GoogLeNet incarnation of the Inception architecture]
i.e. inception(3a)+(3b):
[Table excerpt: the inception(3a) and (3b) rows]
all conv. use ReLU
input→224×224 taking RGB channels with mean subtraction
#3×3 reduce→the number of 1×1 filters in the reduction layer used before the 3×3 conv.
linear→FC
add auxiliary classifiers connected to intermediate layers
→encourage discrimination in lower stages in the classifier, increase the gradient signal that gets propagated back, provide additional regularization
put on top of the output of the inception(4a) and (4d) modules
during training, their loss gets added to the total loss of the network with a discount weight (0.3)
the structure:
[Figure: structure of the auxiliary classifier]
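As a rough illustration of how the auxiliary losses enter training, here is a minimal sketch assuming PyTorch; `googlenet_loss` and the toy logits are made up for the example, only the 0.3 discount weight comes from the paper.

```python
import torch
import torch.nn.functional as F

def googlenet_loss(main_logits, aux1_logits, aux2_logits, targets):
    # auxiliary losses are added to the main loss with a discount weight of 0.3;
    # at inference time the auxiliary classifiers are simply discarded
    main = F.cross_entropy(main_logits, targets)
    aux1 = F.cross_entropy(aux1_logits, targets)
    aux2 = F.cross_entropy(aux2_logits, targets)
    return main + 0.3 * (aux1 + aux2)

# toy usage with random logits for a 1000-class problem
logits = [torch.randn(8, 1000) for _ in range(3)]
targets = torch.randint(0, 1000, (8,))
print(googlenet_loss(*logits, targets))
```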

Training

asynchronous stochastic gradient descent with 0.9 momentum, fixed learning rate schedule (decreasing the learning rate by 4% every 8 epochs)
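The fixed schedule can be sketched as follows, assuming PyTorch; the asynchronous, distributed aspect is not shown, and the placeholder model and the base learning rate of 0.01 are assumptions, not from the paper.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(10, 2)  # placeholder standing in for GoogLeNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# fixed schedule: multiply the learning rate by 0.96 every 8 epochs
scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.96 ** (epoch // 8))

for epoch in range(24):
    # ... one training epoch would run here ...
    scheduler.step()
    print(epoch + 1, optimizer.param_groups[0]["lr"])
```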
the image sampling method works well
sampling of various sized patches of the image whose size is distributed evenly between 8% and 100% of the image area and whose aspect ratio is chosen randomly between 3/4 and 4/3
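This sampling scheme corresponds closely to torchvision's RandomResizedCrop defaults; a minimal sketch, assuming torchvision (the 224 crop size comes from the network's input size).

```python
from torchvision import transforms

train_transform = transforms.Compose([
    # patch area uniform in [8%, 100%] of the image, aspect ratio in [3/4, 4/3],
    # then resize the patch to the 224x224 network input
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
    transforms.ToTensor(),
])
```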

Testing

resize the image to 4 scales where the shorter dimension is 256, 288, 320 and 352 respectively
take the left, center and right squares of the resized images (for portrait images: top, center and bottom)
for each square, take the 4 corner and center 224×224 crops + the square resized to 224×224 + their mirrored versions
→4×3×6×2=144 crops per image
note: such aggressive cropping may not be necessary in real applications
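A minimal sketch that only enumerates the crop combinations described above, to check the 4×3×6×2 = 144 count (no actual image processing; the labels are mine, not the paper's).

```python
# enumerate the crop combinations to check the count: 4 x 3 x 6 x 2 = 144
crops = [
    (scale, square, view, mirrored)
    for scale in (256, 288, 320, 352)                    # 4 scales (shorter side)
    for square in ("left", "center", "right")            # 3 squares per scale
    for view in ("tl", "tr", "bl", "br", "center", "whole-resized")  # 6 per square
    for mirrored in (False, True)                        # 2: original + mirror
]
print(len(crops))  # 144
```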

Appendix

Network In Network

the convolution filter in a CNN is a generalized linear model (GLM) for the underlying data patch
replacing the GLM with a more potent nonlinear function approximator (in this paper, a multilayer perceptron) can enhance the abstraction ability of the local model
[Figure: comparison of a linear convolution layer and an mlpconv layer]
The mlpconv maps the input local patch to the output feature vector with a multilayer perceptron (MLP) consisting of multiple fully connected layers with nonlinear activation functions
the calculation performed by mlpconv layer:
$$f_{i,j,k_1}^{1} = \max\left({w_{k_1}^{1}}^{T} x_{i,j} + b_{k_1},\ 0\right)$$

$$f_{i,j,k_n}^{n} = \max\left({w_{k_n}^{n}}^{T} f_{i,j}^{n-1} + b_{k_n},\ 0\right)$$

n→the number of layers in the multilayer perceptron
$x_{i,j}$→the input patch centered at location (i,j)
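Because the same weights are shared across all locations (i,j), an mlpconv layer is usually implemented as one ordinary convolution followed by 1×1 convolutions; a minimal sketch assuming PyTorch, with illustrative channel numbers (roughly those commonly used for NIN's first block, not taken from this paper).

```python
import torch
import torch.nn as nn

# one mlpconv block: an ordinary k x k conv (the first MLP layer over the local
# patch) followed by 1x1 convs (the remaining, cross-channel MLP layers)
mlpconv = nn.Sequential(
    nn.Conv2d(3, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.Conv2d(192, 160, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(160, 96, kernel_size=1), nn.ReLU(inplace=True),
)
print(mlpconv(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 96, 32, 32])
```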
Notes:
different locations (i,j) in the first layer use the same $w_{k_1}$, so it is different from a normal FC layer
I think it is exactly a conv. layer; a neuron can be regarded as a channel
"The cross channel parametric pooling layer is also equivalent to a convolution layer with 1x1 convolution kernel"→I think it is a similar idea
on the internet, I found that the NIN structure is implemented as a stack of conv. layers
the first one is n×n, the later ones use 1×1 kernels
so, why did the authors of this paper use "multiple fully connected layers"? I am confused about it…
more confusing→there is no conv. layer before the 1×1 conv. layer in GoogLeNet, different from the structure of NIN. how can a 1×1 conv. layer increase the representational power?

More Confusion

In addition to the question I mentioned above, I have plenty of confusion about this paper, especially the part "3 Motivation and High Level Considerations".
The Inception architecture→approximates a sparse structure implied by "Provable Bounds for Learning Some Deep Representations"
Their main result states that if the probability distribution of the data-set is representable by a large, very sparse deep neural network, then the optimal network topology can be constructed layer by layer by analyzing the correlation statistics of the activations of the last layer and clustering neurons with highly correlated outputs.
However, I have not found "the optimal network topology" in that paper. What's more, I cannot find the relation between the Inception module and sparsity.

References

Going deeper with convolutions
Network in Network
Provable Bounds for Learning Some Deep Representations
