I've got a dataset with a large number of rows. Some of the values are NaN, like this:
In [91]: df
Out[91]:
1 3 1 1 1
1 3 1 1 1
2 3 1 1 1
1 1 NaN NaN NaN
1 3 1 1 1
1 1 1 1 1
And I want to count the number of NaN values in each row, like this:
In [91]: nan_counts = <somecode with df>
In [92]: nan_counts
Out[92]:
[0,
0,
0,
3,
0,
0]
What is the best and fastest way to do it?
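For reference, a minimal sketch that reproduces the sample frame above (the column labels are assumed, since the original output omits them):

```python
import numpy as np
import pandas as pd

# Rebuild the sample frame shown in the question; np.nan marks missing cells.
df = pd.DataFrame(
    [[1, 3, 1, 1, 1],
     [1, 3, 1, 1, 1],
     [2, 3, 1, 1, 1],
     [1, 1, np.nan, np.nan, np.nan],
     [1, 3, 1, 1, 1],
     [1, 1, 1, 1, 1]],
    columns=list("abcde"),  # assumed labels, not in the original output
)
print(df)
```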
1 Answer
#1
You could first find whether each element is NaN with isnull(), and then take the row-wise sum with sum(axis=1):
In [195]: df.isnull().sum(axis=1)
Out[195]:
0 0
1 0
2 0
3 3
4 0
5 0
dtype: int64
And, if you want the output as a list, you can use tolist():
In [196]: df.isnull().sum(axis=1).tolist()
Out[196]: [0, 0, 0, 3, 0, 0]
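As a quick sanity check, here is a self-contained sketch of that idiom (the two-row frame is illustrative, not the original data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 3, 1, 1, 1],
                   [1, 1, np.nan, np.nan, np.nan]])
# isnull() marks NaN cells as True; summing booleans row-wise counts them.
nan_counts = df.isnull().sum(axis=1).tolist()
print(nan_counts)  # [0, 3]
```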
Or use count() like:
In [130]: df.shape[1] - df.count(axis=1)
Out[130]:
0 0
1 0
2 0
3 3
4 0
5 0
dtype: int64
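The count-based variant works because count(axis=1) returns the number of non-NaN cells per row, so subtracting it from the column count gives the NaN count. A short sketch (again with illustrative data) showing it agrees with the isnull() approach:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[2, 3, 1, 1, 1],
                   [1, 1, np.nan, np.nan, np.nan],
                   [1, 3, 1, 1, 1]])
# df.shape[1] is the total number of columns; count(axis=1) excludes NaN.
nan_counts = (df.shape[1] - df.count(axis=1)).tolist()
print(nan_counts)  # [0, 3, 0]
```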