File name: tf.keras.datasets data source
File size: 700.23MB
File format: ZIP
Updated: 2023-08-23 15:24:13
Keras
Modules included:

- boston_housing module: Boston housing price regression dataset.
- cifar10 module: CIFAR10 small images classification dataset.
- cifar100 module: CIFAR100 small images classification dataset.
- fashion_mnist module: Fashion-MNIST dataset.
- imdb module: IMDB sentiment classification dataset.
- mnist module: MNIST handwritten digits dataset.
- reuters module: Reuters topic classification dataset.

Usage:

import tensorflow as tf
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

cifar100 = keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data()

cifar10 = keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

imdb = keras.datasets.imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data()

# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# Reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Decode the first review; the indices are offset by 3 because 0, 1 and 2
# are reserved for "padding", "start of sequence", and "unknown"
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in x_train[0]])
print(decoded_review)

boston_housing = keras.datasets.boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

reuters = keras.datasets.reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data()
# Word index for the Reuters dataset (words -> integer indices)
word_index = tf.keras.datasets.reuters.get_word_index(path='reuters_word_index.json')
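The arrays returned by load_data() are plain NumPy arrays. As a quick sanity check, a minimal sketch for Fashion-MNIST (the shapes and dtype below reflect the standard published split: 60,000 training and 10,000 test grayscale images of 28x28 pixels):

import numpy as np
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
print(x_train.shape, x_train.dtype)  # (60000, 28, 28) uint8
print(y_train.shape, y_train.dtype)  # (60000,) uint8
# Pixel values are 0-255; scaling to [0, 1] is a common first step
x_train = x_train.astype(np.float32) / 255.0
x_test = x_test.astype(np.float32) / 255.0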
[File preview]:
datasets
----cifar-100-python()
--------meta(1KB)
--------file.txt~(0B)
--------train(148.06MB)
--------test(29.61MB)
----imdb.npz(16.66MB)
----fashion-mnist()
--------train-images-idx3-ubyte.gz(25.2MB)
--------train-labels-idx1-ubyte.gz(29KB)
--------t10k-labels-idx1-ubyte.gz(5KB)
--------t10k-images-idx3-ubyte.gz(4.22MB)
----cifar-10-batches-py()
--------data_batch_2(29.6MB)
--------data_batch_1(29.6MB)
--------data_batch_5(29.6MB)
--------data_batch_4(29.6MB)
--------test_batch(29.6MB)
--------data_batch_3(29.6MB)
--------readme.html(88B)
--------batches.meta(158B)
----imdb_word_index.json(1.57MB)
----cifar-10-batches-py.tar.gz(162.6MB)
----reuters.npz(2.01MB)
----cifar-100-python.tar.gz(161.17MB)
----reuters_word_index.json(537KB)
----boston_housing.npz(56KB)
----mnist.npz(10.96MB)
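The layout above mirrors the Keras download cache: load_data() stores each dataset under ~/.keras/datasets and reuses the cached copy on later calls. A minimal sketch for using this archive offline, assuming it is extracted into that default cache directory (the path argument of load_data() is resolved relative to it):

import os
from tensorflow import keras

# Extract the ZIP so that mnist.npz, imdb.npz, reuters.npz, etc.
# sit directly under the default Keras cache directory.
cache_dir = os.path.expanduser('~/.keras/datasets')
print(os.listdir(cache_dir))

# With mnist.npz already in place, this call reads the local file
# instead of downloading it.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data(path='mnist.npz')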