The Keras framework ships with several commonly used built-in datasets, such as the MNIST handwritten-digit dataset for image recognition and the IMDB movie-review dataset for text classification. Each of them can be loaded with a single call:
from keras.datasets import mnist
from keras.datasets import imdb

# Each load_data() call returns ((x_train, y_train), (x_test, y_test)).
(x_train, y_train), (x_test, y_test) = mnist.load_data()
(x_train, y_train), (x_test, y_test) = imdb.load_data()
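As a quick sanity check, the returned arrays can be inspected right away. A minimal sketch for MNIST (the shapes in the comments are the standard 60,000/10,000 train/test split):

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 60,000 grayscale images, 28x28 pixels each
print(x_test.shape)   # (10000, 28, 28)
print(y_train[:5])    # integer digit labels, e.g. [5 0 4 1 9]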
The core of imdb.load_data's source looks like this:
    path = get_file(path,
                    origin='https://s3.amazonaws.com/text-datasets/imdb.npz',
                    file_hash='599dadb1135973df5b59232a0e9a887c')
    with np.load(path) as f:
        x_train, labels_train = f['x_train'], f['y_train']
        x_test, labels_test = f['x_test'], f['y_test']
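Here get_file is Keras's caching downloader (also exposed as keras.utils.get_file): it fetches origin into the ~/.keras/datasets cache directory, checks the result against file_hash, and returns the local path; if a file with a matching hash is already cached, the download is skipped. A minimal sketch of that behavior, reusing the URL and hash from the excerpt above (which, as noted below, no longer resolves):

from keras.utils import get_file

# Returns ~/.keras/datasets/imdb.npz; downloads only if no cached copy
# with a matching hash exists. This fails today because the S3 URL is gone.
local_path = get_file('imdb.npz',
                      origin='https://s3.amazonaws.com/text-datasets/imdb.npz',
                      file_hash='599dadb1135973df5b59232a0e9a887c')
print(local_path)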
These datasets are downloaded from https://s3.amazonaws.com, but that site is no longer reachable, so the datasets cannot be fetched this way anymore. There is another way to use them: download the files yourself first, then load them locally.
Baidu Cloud share of the Keras datasets: https://pan.baidu.com/s/1aZRp0uMkNj2QEWYstaNsKQ
Extraction code: 3a2u
Many other datasets are available as well. For an introduction to them, see (reposted): https://blog.csdn.net/qq_37879432/article/details/78557234
After downloading, place the files in the C:\Users\Administrator\.keras\datasets folder. Keras automatically looks for datasets under ~/.keras/datasets, so files placed there are picked up without any download.
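Once imdb.npz is in that folder, the standard call works offline. A minimal sketch, assuming the shared file is byte-identical to the original imdb.npz (so its hash still matches and get_file does not try to re-download):

from keras.datasets import imdb

# 'imdb.npz' is resolved relative to ~/.keras/datasets; the cached copy
# is found there, so the unreachable S3 download is skipped entirely.
(x_train, y_train), (x_test, y_test) = imdb.load_data(path='imdb.npz')
print(len(x_train), len(x_test))  # 25000 25000

For reference, here is load_data's signature and the start of its docstring: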
def load_data(path='imdb.npz', num_words=None, skip_top=0,
              maxlen=None, seed=113,
              start_char=1, oov_char=2, index_from=3, **kwargs):
    """Loads the IMDB dataset.

    # Arguments
        path: where to cache the data (relative to `~/.keras/dataset`).
        num_words: max number of words to include. Words are ranked
            by how often they occur (in the training set) and only
            the most frequent words are kept
        ...
    """
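num_words is the argument you will most often set. A small sketch of how it behaves with the other defaults (index_from=3 offsets all real word indices by 3, and words filtered out by num_words are replaced with oov_char=2):

from keras.datasets import imdb

# Keep only the 10,000 most frequent words; rarer words become oov_char (2).
(x_train, y_train), _ = imdb.load_data(path='imdb.npz', num_words=10000)

# After filtering, no index in any review reaches num_words.
print(max(max(seq) for seq in x_train))  # at most 9999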
That's all.