TensorFlow Precision / Recall / F1 Score and Confusion Matrix

Date: 2022-12-07 11:19:55

I would like to know if there is a way to implement the different score functions from the scikit-learn package, like this one:

from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)

into a TensorFlow model to get the different scores.
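(For reference, a self-contained toy run of that sklearn call, with made-up label vectors:)

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]

print(confusion_matrix(y_true, y_pred))
# [[1 1]
#  [1 2]]  -- rows are true classes, columns are predicted classes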

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(1):
        avg_cost = 0.
        total_batch = len(train_arrays) / batch_size
        for batch in range(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)

    print "Optimization Finished!"
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", batch, accuracy.eval({x: test_arrays, y: test_labels})

Will I have to run the session again to get the predictions?

5 Answers

#1 (35 votes)

You do not really need sklearn to calculate precision/recall/F1 score. You can easily express them in a TF-ish way by looking at the formulas:

(image: the standard precision / recall / F1 formulas)

Now, if you have your actual and predicted values as vectors of 0/1, you can calculate TP, TN, FP, FN using tf.count_nonzero:

TP = tf.count_nonzero(predicted * actual)              # predicted 1, actual 1
TN = tf.count_nonzero((predicted - 1) * (actual - 1))  # predicted 0, actual 0
FP = tf.count_nonzero(predicted * (actual - 1))        # predicted 1, actual 0
FN = tf.count_nonzero((predicted - 1) * actual)        # predicted 0, actual 1

Now your metrics are easy to calculate:

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
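As a minimal, runnable sketch of the above (TF 1.x, with made-up 0/1 vectors, counting with dtype=tf.float32 as answer #3 below recommends):

import tensorflow as tf

# hypothetical ground truth and predictions
actual = tf.constant([1, 1, 0, 0, 1], dtype=tf.int64)
predicted = tf.constant([1, 0, 0, 1, 1], dtype=tf.int64)

TP = tf.count_nonzero(predicted * actual, dtype=tf.float32)
TN = tf.count_nonzero((predicted - 1) * (actual - 1), dtype=tf.float32)
FP = tf.count_nonzero(predicted * (actual - 1), dtype=tf.float32)
FN = tf.count_nonzero((predicted - 1) * actual, dtype=tf.float32)

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

with tf.Session() as sess:
    p, r, f = sess.run([precision, recall, f1])
    print("precision=%.3f recall=%.3f f1=%.3f" % (p, r, f))  # 0.667 0.667 0.667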

#2 (27 votes)

Maybe this example will speak to you:

import numpy as np
import sklearn as sk
import sklearn.metrics  # so that sk.metrics.* resolves

pred = multilayer_perceptron(x, weights, biases)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(150):
        for i in xrange(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)

    # metrics: fetch accuracy and predictions in a single session run
    y_p = tf.argmax(pred, 1)
    val_accuracy, y_pred = sess.run([accuracy, y_p], feed_dict={x: test_arrays, y: test_label})

    print "validation accuracy:", val_accuracy
    y_true = np.argmax(test_label, 1)
    print "Precision", sk.metrics.precision_score(y_true, y_pred)
    print "Recall", sk.metrics.recall_score(y_true, y_pred)
    print "f1_score", sk.metrics.f1_score(y_true, y_pred)
    print "confusion_matrix"
    print sk.metrics.confusion_matrix(y_true, y_pred)
    fpr, tpr, thresholds = sk.metrics.roc_curve(y_true, y_pred)

#3 (3 votes)

Since I do not have enough reputation to add a comment to Salvador Dali's answer, this is the way to go:

tf.count_nonzero casts your values to tf.int64 unless specified otherwise. Using:

argmax_prediction = tf.argmax(prediction, 1)
argmax_y = tf.argmax(y, 1)

TP = tf.count_nonzero(argmax_prediction * argmax_y, dtype=tf.float32)
TN = tf.count_nonzero((argmax_prediction - 1) * (argmax_y - 1), dtype=tf.float32)
FP = tf.count_nonzero(argmax_prediction * (argmax_y - 1), dtype=tf.float32)
FN = tf.count_nonzero((argmax_prediction - 1) * argmax_y, dtype=tf.float32)

is a really good idea, because with integer counts the precision/recall divisions can otherwise silently turn into integer division (under Python 2's / operator) and come out as 0.
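A short end-to-end sketch of the above (hypothetical one-hot tensors; the 0/1 arithmetic only distinguishes class 1 from class 0, so this covers the binary case):

import tensorflow as tf

# made-up scores and one-hot labels for a two-class problem
prediction = tf.constant([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
y = tf.constant([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])

argmax_prediction = tf.argmax(prediction, 1)  # -> [0, 1, 0]
argmax_y = tf.argmax(y, 1)                    # -> [0, 1, 1]

TP = tf.count_nonzero(argmax_prediction * argmax_y, dtype=tf.float32)
FP = tf.count_nonzero(argmax_prediction * (argmax_y - 1), dtype=tf.float32)
FN = tf.count_nonzero((argmax_prediction - 1) * argmax_y, dtype=tf.float32)

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

with tf.Session() as sess:
    print(sess.run([precision, recall, f1]))  # [1.0, 0.5, 0.6666...]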

#4 (1 vote)

Use the metrics APIs provided in tf.contrib.metrics, for example:

labels = ...
predictions = ...

# note: the contrib metrics take predictions first, then labels
accuracy, update_op_acc = tf.contrib.metrics.streaming_accuracy(predictions, labels)
error, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels)

sess.run(tf.local_variables_initializer())
for batch in range(num_batches):
    sess.run([update_op_acc, update_op_error])
accuracy_value, error_value = sess.run([accuracy, error])
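tf.contrib.metrics also provides streaming precision and recall, which map directly onto the scores the question asks about. A runnable sketch with made-up 0/1 data (TF 1.x, assuming contrib is available):

import tensorflow as tf

labels = tf.constant([1, 0, 1, 1, 0])
predictions = tf.constant([1, 0, 0, 1, 1])

precision, update_prec = tf.contrib.metrics.streaming_precision(predictions, labels)
recall, update_rec = tf.contrib.metrics.streaming_recall(predictions, labels)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    sess.run([update_prec, update_rec])  # accumulate one batch
    p, r = sess.run([precision, recall])
    print("precision=%.3f recall=%.3f f1=%.3f" % (p, r, 2 * p * r / (p + r)))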

#5 (1 vote)

Multi-label case

Previous answers do not specify how to handle the multi-label case, so here is a version implementing three types of multi-label F1 score in tensorflow: micro, macro and weighted (as per scikit-learn).

Update (06/06/18): I wrote a blog post about how to compute the streaming multilabel F1 score, in case it helps anyone (it's a longer process, and I don't want to overload this answer).

f1s = [0, 0, 0]

y_true = tf.cast(y_true, tf.float64)
y_pred = tf.cast(y_pred, tf.float64)

# axis=None counts TP/FP/FN globally (micro); axis=0 counts them per class (macro)
for i, axis in enumerate([None, 0]):
    TP = tf.count_nonzero(y_pred * y_true, axis=axis)
    FP = tf.count_nonzero(y_pred * (y_true - 1), axis=axis)
    FN = tf.count_nonzero((y_pred - 1) * y_true, axis=axis)

    precision = TP / (TP + FP)
    recall = TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)

    f1s[i] = tf.reduce_mean(f1)

# weighted: per-class f1 (from the axis=0 pass) weighted by each class's support
weights = tf.reduce_sum(y_true, axis=0)
weights /= tf.reduce_sum(weights)

f1s[2] = tf.reduce_sum(f1 * weights)

micro, macro, weighted = f1s

Correctness

import numpy as np
import tensorflow as tf
from time import time
from sklearn.metrics import f1_score


def tf_f1_score(y_true, y_pred):
    """Computes 3 different f1 scores: micro, macro, weighted.
    micro: f1 score computed globally across all classes
    macro: mean of the per-class f1 scores
    weighted: average of the per-class f1 scores,
            weighted by the support of each class

    Args:
        y_true (Tensor): labels, with shape (batch, num_classes)
        y_pred (Tensor): model's predictions, same shape as y_true

    Returns:
        tuple(Tensor): (micro, macro, weighted)
                    tuple of the computed f1 scores
    """

    f1s = [0, 0, 0]

    y_true = tf.cast(y_true, tf.float64)
    y_pred = tf.cast(y_pred, tf.float64)

    for i, axis in enumerate([None, 0]):
        TP = tf.count_nonzero(y_pred * y_true, axis=axis)
        FP = tf.count_nonzero(y_pred * (y_true - 1), axis=axis)
        FN = tf.count_nonzero((y_pred - 1) * y_true, axis=axis)

        precision = TP / (TP + FP)
        recall = TP / (TP + FN)
        f1 = 2 * precision * recall / (precision + recall)

        f1s[i] = tf.reduce_mean(f1)

    weights = tf.reduce_sum(y_true, axis=0)
    weights /= tf.reduce_sum(weights)

    f1s[2] = tf.reduce_sum(f1 * weights)

    micro, macro, weighted = f1s
    return micro, macro, weighted


def compare(nb, dims):
    labels = (np.random.randn(nb, dims) > 0.5).astype(int)
    predictions = (np.random.randn(nb, dims) > 0.5).astype(int)

    stime = time()
    mic = f1_score(labels, predictions, average='micro')
    mac = f1_score(labels, predictions, average='macro')
    wei = f1_score(labels, predictions, average='weighted')

    print('sklearn in {:.4f}:\n    micro: {:.8f}\n    macro: {:.8f}\n    weighted: {:.8f}'.format(
        time() - stime, mic, mac, wei
    ))

    gtime = time()
    tf.reset_default_graph()
    y_true = tf.Variable(labels)
    y_pred = tf.Variable(predictions)
    micro, macro, weighted = tf_f1_score(y_true, y_pred)
    with tf.Session() as sess:
        tf.global_variables_initializer().run(session=sess)
        stime = time()
        mic, mac, wei = sess.run([micro, macro, weighted])
        print('tensorflow in {:.4f} ({:.4f} with graph time):\n    micro: {:.8f}\n    macro: {:.8f}\n    weighted: {:.8f}'.format(
            time() - stime, time()-gtime,  mic, mac, wei
        ))

compare(10 ** 6, 10)

outputs:

>> rows: 10^6 dimensions: 10
sklearn in 2.3939:
    micro: 0.30890287
    macro: 0.30890275
    weighted: 0.30890279
tensorflow in 0.2465 (3.3246 with graph time):
    micro: 0.30890287
    macro: 0.30890275
    weighted: 0.30890279
