File name: 理解__概括__传递.zip (Understanding / Generalization / Transfer)
File size: 47.1 MB
File format: ZIP
Updated: 2020-03-14 13:52:16
Deep learning
Distilling the knowledge in a neural network (2015), G. Hinton et al.
Abstract: A very simple way to improve the performance of almost any machine learning algorithm is to train many models on the same data and average their predictions. The problem is that making predictions with the full set of models is cumbersome, and deploying it to a large number of users may be too computationally expensive, especially when the individual models are large neural networks. Caruana and his collaborators have shown that the knowledge in an ensemble can be compressed into a single model that is much easier to deploy, and we extend this approach further using a different compression technique. We achieve some surprising results on MNIST, and we show that the acoustic model of a heavily used commercial system can be significantly improved by distilling the knowledge of an ensemble into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes the full models cannot tell apart; these specialist models can be trained rapidly and in parallel. (A minimal code sketch of the distillation loss follows the paper list below.)

Other papers in this collection:
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al.
How transferable are features in deep neural networks? (2014), J. Yosinski et al.
CNN features off-the-Shelf: An astounding baseline for recognition (2014), A. Razavian et al.
Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al.
Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus
DeCAF: A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al.
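The distillation recipe summarized in the abstract can be written as a single loss function. Below is a minimal sketch, assuming PyTorch; the function name distillation_loss and the parameters temperature and alpha are illustrative choices, not taken from the paper or any released code.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.5):
        # Soften teacher and student outputs with the same temperature T.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence between the softened distributions; the T**2 factor
        # keeps the soft-target gradients comparable in scale across
        # temperatures, as the paper notes.
        soft_loss = F.kl_div(log_soft_student, soft_teacher,
                             reduction="batchmean") * (temperature ** 2)
        # Ordinary cross-entropy against the hard labels.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

The student is trained on a weighted mix of the softened teacher distribution and the true labels; compressing an ensemble this way is what lets a single model replace it at deployment time.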
File preview:
理解__概括__传递
----CNN features off-the-Shelf- An astounding baseline for recognition (2014), A. Razavian et al..pdf(293KB)
----How transferable are features in deep neural networks__ (2014), J. Yosinski et al..pdf(427KB)
----Decaf- A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al.pdf(3.21MB)
----Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus.pdf(34.56MB)
----Distilling the knowledge in a neural network (2015), G. Hinton et al..pdf(104KB)
----Deep neural networks are easily fooled High confidence predictions for unrecognizable images (2015), A. Nguyen et al.pdf(9.52MB)
----Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al..pdf(1.5MB)