Deep Learning Algorithm Index (continuously updated)

Date: 2021-08-30 14:54:01

https://zhuanlan.zhihu.com/p/26004118

Machine learning has grown explosively in the last few years, and material about it floods the internet. Alongside my own study I am building an index that collects the most promising (rather than the most exhaustive) machine learning algorithms, the best tutorials, and open-source code close to the industrial state of the art. My own ability is limited, so corrections and additions from readers are welcome.

Models

1. Reinforcement Learning

Leading figure: David Silver

Tutorials

David Silver's 2015 UCL Course on RL: Teaching

David Silver's Tutorial: Deep Reinforcement Learning

CS 294: Deep Reinforcement Learning, Spring 2017 (UC Berkeley)

David Silver's AlphaGo paper published in Nature: Mastering the Game of Go with Deep Neural Networks and Tree Search

2014: Deterministic Policy Gradient Algorithms

DDPG, published by DeepMind at ICLR 2016: Continuous Control with Deep Reinforcement Learning

2015: Deep Reinforcement Learning with Double Q-learning (a toy sketch of the Double DQN target follows this list)

2015: Massively Parallel Methods for Deep Reinforcement Learning

2016: Prioritized Experience Replay

2016: Dueling Network Architectures for Deep Reinforcement Learning

2017: Value Iteration Networks
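
Most of the papers above are DQN variants. As a rough, hedged illustration of the overestimation problem that Double Q-learning addresses, here is a toy NumPy sketch of the two target computations; the arrays `q_online_next` and `q_target_next` are stand-ins of my own for the online and target networks, not code from the papers.

```python
import numpy as np

# Toy sketch: DQN target vs. Double DQN target for a single transition.
# q_online_next / q_target_next stand in for the next-state Q-values produced
# by the online and target networks (assumed names, for illustration only).

def dqn_target(q_target_next, reward, gamma, done):
    # Standard DQN: the target network both selects and evaluates the action.
    return reward + gamma * (1.0 - done) * np.max(q_target_next)

def double_dqn_target(q_online_next, q_target_next, reward, gamma, done):
    # Double DQN: the online network selects the action, the target network
    # evaluates it, which reduces the overestimation bias of the max operator.
    best_action = np.argmax(q_online_next)
    return reward + gamma * (1.0 - done) * q_target_next[best_action]

# Example with 3 actions in the next state.
q_online_next = np.array([1.0, 2.5, 0.3])
q_target_next = np.array([1.2, 1.8, 0.5])
print(dqn_target(q_target_next, reward=1.0, gamma=0.99, done=0.0))
print(double_dqn_target(q_online_next, q_target_next, reward=1.0, gamma=0.99, done=0.0))
```

The only difference is who picks the argmax: decoupling action selection (online network) from evaluation (target network) is what reduces the upward bias.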

Blog posts

[David Silver's RL open course, part 1] An introduction to reinforcement learning

An analysis of AlphaGo - Zhihu column

In depth | David Silver's full walkthrough of deep reinforcement learning: from basic concepts to AlphaGo

Major | Facebook's 田渊栋 (Yuandong Tian) explains: how does deep learning do game reasoning?

2. GAN

Leading figure: Ian Goodfellow

2014, Ian Goodfellow introduces the GAN: [1406.2661] Generative Adversarial Networks

2017, WGAN: Wasserstein GAN, with a reference implementation at martinarjovsky/WassersteinGAN

Reddit discussion: [R] [1701.07875] Wasserstein GAN • r/MachineLearning

2015, DCGAN: [1511.06434] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

2017, Google Brain's AdaGAN

Blog posts

The astonishing Wasserstein GAN - Zhihu column
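
As a hedged companion to the WGAN entries above, the sketch below contrasts the standard GAN discriminator loss with the WGAN critic loss on placeholder critic outputs. The Lipschitz constraint (weight clipping in the original paper) and the generator update are omitted, and all variable names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_discriminator_loss(d_real, d_fake):
    # Standard GAN: binary cross-entropy on sigmoid-squashed discriminator scores.
    return -(np.mean(np.log(sigmoid(d_real) + 1e-8))
             + np.mean(np.log(1.0 - sigmoid(d_fake) + 1e-8)))

def wgan_critic_loss(d_real, d_fake):
    # WGAN: the critic maximizes the difference of mean scores (no sigmoid, no log);
    # the required Lipschitz constraint (weight clipping in the paper) is not shown.
    return -(np.mean(d_real) - np.mean(d_fake))

# Placeholder critic outputs on a batch of real and generated samples.
rng = np.random.default_rng(0)
d_real = rng.standard_normal(64) + 1.0
d_fake = rng.standard_normal(64) - 1.0
print(gan_discriminator_loss(d_real, d_fake), wgan_critic_loss(d_real, d_fake))
```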

3. Deep Learning

Leading figures: the big three, Hinton, LeCun, and Bengio

Tutorials

A clear and accessible course taught by Google principal scientist Vincent Vanhoucke: From Machine Learning to Deep Learning (Udacity)

Convolutional neural network tricks: A Guide to Convolution Arithmetic for Deep Learning

The Deep Learning book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep Learning

4. RNN and LSTM

Leading figure: Alex Graves; his personal site is Home Page of Alex Graves

Blog posts

What tutorials are there for LSTM (Long Short-Term Memory) and RNN (recurrent neural network) models?

5. Attention Model (the encoder-decoder framework)

Leading figure: ?

[1409.0473] Neural Machine Translation by Jointly Learning to Align and Translate (Yoshua Bengio's group; a minimal attention sketch follows at the end of this section)

Encoding Source Language with Convolutional Neural Network for Machine Translation (Hang Li)

Survey on Attention-based Models Applied in NLP

What is an attention-based model, and what problem does it solve? See @Tao Lei's answer.

Sequence to Sequence Learning with Neural Networks, with source code

A Neural Attention Model for Abstractive Sentence Summarization

Blog posts

Attention models in natural language processing: what they are and why

Two modes of research innovation, illustrated with the attention model
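
To make the attention mechanism of the Bahdanau et al. paper above concrete, here is a minimal sketch of additive attention with random placeholder weights; the shapes and names (W_q, W_k, v) are my own for illustration, not the paper's released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(decoder_state, encoder_states, W_q, W_k, v):
    # score_i = v^T tanh(W_q s + W_k h_i) for each encoder hidden state h_i.
    scores = np.array([v @ np.tanh(W_q @ decoder_state + W_k @ h)
                       for h in encoder_states])
    weights = softmax(scores)                                   # alignment weights over source positions
    context = (weights[:, None] * encoder_states).sum(axis=0)   # weighted context vector
    return context, weights

rng = np.random.default_rng(0)
d = 8
encoder_states = rng.standard_normal((5, d))   # 5 source positions
decoder_state = rng.standard_normal(d)         # current decoder state
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)

context, weights = additive_attention(decoder_state, encoder_states, W_q, W_k, v)
print(weights.round(3), context.shape)
```

The decoder recomputes these weights at every output step, which is what lets it align each target word with the relevant source positions.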

Source code

1. DMLC

Distributed (Deep) Machine Learning Community

2. TensorFlow

tensorflow/tensorflow

3. Caffe/Caffe2

Caffe | Deep Learning Framework

caffe2/caffe2

4. Microsoft open source

Microsoft/LightGBM

Microsoft/LightLDA

5. Facebook

Open-source computer Go program: facebookresearch/darkforestGo, led by @田渊栋

Applications

Deep reinforcement learning applied to Go, i.e., AlphaGo.

YouTube video recommendation: Deep Neural Networks for YouTube Recommendations

Google's CTR-prediction model: Wide & Deep Learning for Recommender Systems, with open-source code
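
A hedged sketch of the Wide & Deep idea: a linear ("wide") part over sparse cross-product features and a small MLP ("deep") part over dense embeddings are summed into one sigmoid for CTR. The shapes, the single hidden layer, and all variable names are illustrative, not the released TensorFlow implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
wide_x = rng.integers(0, 2, size=100).astype(float)  # sparse binary cross-product features
deep_x = rng.standard_normal(16)                      # concatenated dense embeddings

w_wide = rng.standard_normal(100) * 0.01              # wide (linear) weights
W1 = rng.standard_normal((32, 16))                    # deep part: one ReLU hidden layer
W2 = rng.standard_normal((1, 32))

wide_logit = w_wide @ wide_x
deep_logit = (W2 @ np.maximum(0.0, W1 @ deep_x))[0]

ctr = sigmoid(wide_logit + deep_logit)                # joint prediction through one sigmoid
print(ctr)
```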

Microsoft's DSSM model: DSSM can be used to develop latent semantic models that project entities of different types (e.g., queries and documents) into a common low-dimensional semantic space for a variety of machine learning tasks such as ranking and classification. DSSM - Microsoft Research
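
A minimal sketch of the idea in that description: two towers map a query and a document into the same low-dimensional semantic space, and cosine similarity serves as the ranking score. The random linear maps and letter-trigram-style bag-of-words inputs below are stand-ins for the real DSSM networks.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

rng = np.random.default_rng(0)
vocab, dim = 1000, 128
W_query = rng.standard_normal((dim, vocab)) * 0.01   # stand-in for the query tower
W_doc = rng.standard_normal((dim, vocab)) * 0.01     # stand-in for the document tower

query_bow = rng.random(vocab)                         # e.g. letter-trigram counts
doc_bow = rng.random(vocab)

q_vec = np.tanh(W_query @ query_bow)                  # projection into the shared semantic space
d_vec = np.tanh(W_doc @ doc_bow)
print(cosine(q_vec, d_vec))                           # relevance score used for ranking
```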

~~ Updated 2017-04-16 ~~

Position-bias optimization: Position-Normalized Click Prediction in Search Advertising
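
A toy illustration of the position-normalization idea, assuming observed CTR factors as P(click | seen) x P(seen | position): raw CTR is divided by a per-position examination prior before modeling relevance. The prior values below are invented for the example, not taken from the cited paper.

```python
import numpy as np

# Assumed per-position examination prior P(seen | position); invented numbers.
position_prior = np.array([1.00, 0.60, 0.40, 0.25])
impressions = np.array([1000, 1000, 1000, 1000])
clicks = np.array([120, 70, 45, 28])

raw_ctr = clicks / impressions                 # P(click | shown at this position)
normalized_ctr = raw_ctr / position_prior      # rough estimate of P(click | seen)
print(raw_ctr, normalized_ctr.round(3))
```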

Tags: Machine Learning, Deep Learning, Reinforcement Learning