Links to the course videos released so far and to the lecture slides (PPT) are provided at the end of this post.
Deep learning's recent spectacular successes have been purely empirical. Nevertheless, researchers keep trying to explain, from a theoretical standpoint, why deep learning achieves such important results.
In this course we will review recent work by Bruna and Mallat, Mhaskar and Poggio, Papyan and Elad, Bolcskei and his co-authors, Baraniuk and his co-authors, and others, which attempts to build a theoretical framework that explains these achievements of deep learning.
The course begins with some basic theoretical background; after that, several of the authors are invited to give guest lectures on specific papers. The course meets once a week.
Course homepage: https://stats385.github.io/
Course outline and reading list:
Lecture 1 – Deep Learning Challenge. Is There Theory?
Readings
1. Deep Deep Trouble
2. Why 2016 is The Global Tipping Point...
3. Are AI and ML Killing Analyticals...
4. The Dark Secret at The Heart of AI
5. AI Robots Learning Racism...
6. FaceApp Forced to Pull 'Racist' Filters...
7. Losing a Whole Generation of Young Men to Video Games
Lecture 2 – Overview of Deep Learning From a Practical Point of View
Readings
1. Emergence of simple cell
2. ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
3. Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)
4. Going Deeper with Convolutions (GoogLeNet)
5. Deep Residual Learning for Image Recognition (ResNet)
6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
7. Visualizing and Understanding Convolutional Networks
Lecture 3
Readings
1. A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
2. Energy Propagation in Deep Convolutional Neural Networks
3. Discrete Deep Feature Extraction: A Theory and New Architectures
4. Topology Reduction in Deep Convolutional Feature Extraction Networks
Lecture 4
Readings
1. A Probabilistic Framework for Deep Learning
2. Semi-Supervised Learning with the Deep Rendering Mixture Model
3. A Probabilistic Theory of Deep Learning
Lecture 5
Readings
1. Why and When Can Deep-but Not Shallow-networks Avoid the Curse of Dimensionality: A Review
2. Learning Functions: When is Deep Better Than Shallow
Lecture 6
Readings
1. Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach
2. Convolutional Kernel Networks
3. Kernel Descriptors for Visual Recognition
4. End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
5. Learning with Kernels
6. Kernel Based Methods for Hypothesis Testing
Lecture 7
Readings
1. Geometry of Neural Network Loss Surfaces via Random Matrix Theory
2. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
3. Nonlinear random matrix theory for deep learning
Lecture 8
Readings
1. Deep Learning without Poor Local Minima
2. Topology and Geometry of Half-Rectified Network Optimization
3. Convexified Convolutional Neural Networks
4. Implicit Regularization in Matrix Factorization
Lecture 9
Readings
1. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
2. Perception as an inference problem
3. A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information
Lecture 10
Readings
1. Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding
2. Convolutional Neural Networks Analyzed via Convolutional Sparse Coding
3. Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning
4. Convolutional Dictionary Learning via Local Processing
Online course videos:
https://www.bilibili.com/video/av16136625/
Course slides (PPT) download:
Link: https://pan.baidu.com/s/1slyphil
Password: hrph
Recommended past content:
National * University (Hung-yi Lee), Fall 2017: latest course on deep learning and machine learning applications and their in-depth, structured study
Model Roundup 18: An introduction to the basics of Reinforcement Learning
Pure Practical Content 15: 48 deep learning platforms and open-source toolkits, including many you probably don't know!
Stanford Spring 2017: Convolutional Neural Networks for Visual Recognition, course videos and slides
Stanford Spring 2017: latest Reinforcement Learning course
<Model Roundup 6> Stacked Autoencoder (Stacked_AutoEncoder, SAE)
<Video Tutorial 2> Generative Adversarial Networks (GAN) video tutorial, part 6 (complete edition)
For more classic papers, practical experience, and the latest news on deep learning applications in NLP, follow the WeChat public account "Deep Learning and NLP" ("DeepLearning_NLP") or scan the QR code to follow.