This post shares the concrete code for implementing logistic regression in TensorFlow, for your reference. The details are as follows.
1. Import the modules
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
from matplotlib import pyplot as plt
%matplotlib inline
# Import TensorFlow
import tensorflow as tf
# Import MNIST (handwritten digit dataset)
from tensorflow.examples.tutorials.mnist import input_data
2. Load the training and test data
import ssl
# Work around SSL certificate errors when downloading the dataset
ssl._create_default_https_context = ssl._create_unverified_context
# Download (or load) MNIST; one_hot=True encodes each label as a 10-dim one-hot vector
mnist = input_data.read_data_sets('./TensorFlow', one_hot=True)
test = mnist.test
test_images = test.images
train = mnist.train
images = train.images
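A quick sanity check on the shapes (assuming the download succeeded): each image is flattened to 784 = 28 × 28 floats, and with one_hot=True each label is a 10-dimensional one-hot vector.

print(images.shape)        # (55000, 784) training images
print(train.labels.shape)  # (55000, 10)  one-hot training labels
print(test_images.shape)   # (10000, 784) test images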
3. Build the linear model
# Create placeholders X (images) and Y (one-hot labels)
X = tf.placeholder(tf.float32, shape=[None, 784])
Y = tf.placeholder(tf.float32, shape=[None, 10])
# Initialize the weight matrix W and bias b (here to zeros)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# The linear model gives the predicted logits
y_pre = tf.matmul(X, W) + b
# Turn the logits into class probabilities with softmax
y_pre_r = tf.nn.softmax(y_pre)
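To see concretely what tf.nn.softmax computes, here is a small NumPy sketch of the same formula, softmax(z)_i = exp(z_i) / Σ_j exp(z_j), on a made-up three-class logit vector (the values are illustrative only; np is already imported above):

z = np.array([2.0, 1.0, 0.1])    # made-up logits for 3 classes
p = np.exp(z) / np.exp(z).sum()  # exponentiate, then normalize
print(p, p.sum())                # ~[0.659 0.242 0.099], sums to 1.0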
4. Construct the loss function
# Cross-entropy: -sum(Y * log(y_pre_r)), cf. the entropy formula -p_i * log(p_i)
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y_pre_r), axis=1))
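One caveat: tf.log(y_pre_r) yields -inf (and NaN gradients) if any softmax output underflows to 0. A common, numerically more stable alternative in TF 1.x is the fused op that takes the raw logits y_pre directly; mathematically it computes the same mean cross-entropy. A sketch:

# Stable variant (sketch): fused softmax + cross-entropy on the raw logits
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=y_pre))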
5. Run gradient descent to minimize the loss
# learning_rate: the step size taken in the direction of steepest descent at each update
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
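Under the hood, minimize(cost) computes ∂cost/∂W and ∂cost/∂b and applies the update W ← W - learning_rate · ∂cost/∂W (likewise for b). A hand-rolled sketch of one such update with tf.gradients and tf.assign, for illustration only (the training loop below runs the optimizer above):

# Manual gradient-descent step (illustrative sketch)
grad_W, grad_b = tf.gradients(cost, [W, b])
manual_step = tf.group(tf.assign(W, W - learning_rate * grad_W),
                       tf.assign(b, b - learning_rate * grad_b))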
6. Initialize TensorFlow and train
# Define the training parameters
# Number of passes over the training set
training_epochs = 25
# batch: each training step feeds the algorithm batch_size (here 10) examples
batch_size = 10
# Print progress every display_step epochs
display_step = 5
# Variable-initialization op
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    # Loop over the training epochs
    for epoch in range(training_epochs):
        avg_cost = 0.
        # Batches per epoch = total training samples / batch size
        total_batch = int(train.num_examples / batch_size)
        for i in range(total_batch):
            # Fetch the next batch_size training examples
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch
        if (epoch + 1) % display_step == 0:
            print(batch_xs.shape, batch_ys.shape)
            print('epoch:', '%04d' % (epoch + 1), 'cost=', '{:.9f}'.format(avg_cost))
    print('Optimization Finished!')
    # 7. Evaluate the model
    # Test model: a prediction is correct when the most probable class matches the label
    correct_prediction = tf.equal(tf.argmax(y_pre_r, 1), tf.argmax(Y, 1))
    # Calculate accuracy for 3000 examples; tf.cast converts booleans to floats
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({X: mnist.test.images[:3000], Y: mnist.test.labels[:3000]}))
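To spot-check an individual result, one can fetch the probability vector for a single test image. A minimal sketch; it must run inside the same with tf.Session() block above, before the session closes:

# Probabilities for the first test image; shape (1, 10)
probs = sess.run(y_pre_r, feed_dict={X: mnist.test.images[:1]})
print('predicted:', probs.argmax(axis=1)[0],
      'actual:', mnist.test.labels[:1].argmax(axis=1)[0])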
That is all for this article. We hope it helps with your studies, and we hope you will keep supporting 服务器之家.
Original article: https://blog.csdn.net/weixin_38748717/article/details/78859124