A faster implementation of pandas apply

Time: 2020-12-09 21:19:56

I have a pandas DataFrame in which I would like to check, row by row, whether the string in one column is contained in the string of another column.

Suppose:

import pandas as pd

df = pd.DataFrame({'A': ['some text here', 'another text', 'and this'],
                   'B': ['some', 'somethin', 'this']})

I would like to check if df.B[0] is in df.A[0], df.B[1] is in df.A[1], and so on.

Current approach

I have the following implementation using apply:

df.apply(lambda x: x[1] in x[0], axis=1)

The result is a Series of [True, False, True], which is fine, but for my DataFrame shape (it is in the millions of rows) it takes quite long.
Is there a better (i.e. faster) implementation?

Unsuccessful approach

I tried the pandas.Series.str.contains approach, but it can only take a string for the pattern, not a Series:

df['A'].str.contains(df['B'], regex=False)

4 Answers

#1


6  

Use np.vectorize - it bypasses the apply overhead, so it should be a bit faster.

import numpy as np

v = np.vectorize(lambda x, y: y in x)

v(df.A, df.B)
array([ True, False,  True], dtype=bool)
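
As a small usage sketch (the column name B_in_A is just illustrative, not from the original answer), the boolean array can be attached back to the frame or used as a mask:

df['B_in_A'] = v(df.A, df.B)   # boolean column, aligned by position
df[df['B_in_A']]               # keep only the rows where B occurs in A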

Here's a timing comparison -

df = pd.concat([df] * 10000)

%timeit df.apply(lambda x: x[1] in x[0], axis=1)
1 loop, best of 3: 1.32 s per loop

%timeit v(df.A, df.B)
100 loops, best of 3: 5.55 ms per loop

# Psidom's answer
%timeit [b in a for a, b in zip(df.A, df.B)]
100 loops, best of 3: 3.34 ms per loop

Both are pretty competitive options!

Edit: adding timings for Wen's and MaxU's answers -

# Wen's answer
%timeit df.A.replace(dict(zip(df.B.tolist(),[np.nan]*len(df))),regex=True).isnull()
10 loops, best of 3: 49.1 ms per loop

# MaxU's answer
%timeit df['A'].str.split(expand=True).eq(df['B'], axis=0).any(1)
10 loops, best of 3: 87.8 ms per loop

#2


5  

Try zip; it's significantly faster than apply in this case:

df = pd.concat([df] * 10000)
df.head()
#                A         B
#0  some text here      some
#1    another text  somethin
#2        and this      this
#0  some text here      some
#1    another text  somethin

%timeit df.apply(lambda x: x[1] in x[0], axis=1)
# 1 loop, best of 3: 697 ms per loop

%timeit [b in a for a, b in zip(df.A, df.B)]
# 100 loops, best of 3: 3.53 ms per loop

# @coldspeed's np.vectorize solution
%timeit v(df.A, df.B)
# 100 loops, best of 3: 4.18 ms per loop
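
If you need the result aligned with the original index rather than as a plain list, a minimal sketch (assuming pandas is imported as pd, as elsewhere in this thread) is:

result = pd.Series([b in a for a, b in zip(df.A, df.B)], index=df.index)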

#3


3  

UPDATE: we can also try to use numba:

import numpy as np
from numba import jit

@jit
def check_b_in_a(a, b):
    # element-wise check: is b[i] a substring of a[i]?
    result = np.zeros(len(a), dtype=bool)
    for i in range(len(a)):
        result[i] = b[i] in a[i]
    return result

In [100]: check_b_in_a(df.A.values, df.B.values)
Out[100]: array([ True, False,  True], dtype=bool)

Yet another vectorized solution:

In [50]: df['A'].str.split(expand=True).eq(df['B'], axis=0).any(1)
Out[50]:
0     True
1    False
2     True
dtype: bool
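
Note that this variant compares whole whitespace-delimited tokens of A against B rather than arbitrary substrings, so it can disagree with the in-based checks. A small illustrative sketch (the row below is hypothetical, not from the question):

d = pd.DataFrame({'A': ['another text'], 'B': ['tex']})
d.B[0] in d.A[0]                                              # True  (substring check)
d['A'].str.split(expand=True).eq(d['B'], axis=0).any(axis=1)  # False (whole-token check)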

NOTE: it's much slower than Psidom's and COLDSPEED's solutions:

In [51]: df = pd.concat([df] * 10000)

# Psidom
In [52]: %timeit [b in a for a, b in zip(df.A, df.B)]
7.45 ms ± 270 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# cᴏʟᴅsᴘᴇᴇᴅ
In [53]: %timeit v(df.A, df.B)
15.4 ms ± 217 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# MaxU (1)    
In [54]: %timeit df['A'].str.split(expand=True).eq(df['B'], axis=0).any(1)
185 ms ± 2.29 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# MaxU (2)    
In [103]: %timeit check_b_in_a(df.A.values, df.B.values)
22.7 ms ± 135 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

# Wen
In [104]: %timeit df.A.replace(dict(zip(df.B.tolist(),[np.nan]*len(df))),regex=True).isnull()
134 ms ± 233 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

#4


3  

Using replace to substitute matches with NaN, then checking for nulls:

df.A.replace(dict(zip(df.B.tolist(),[np.nan]*len(df))),regex=True).isnull()
Out[84]: 
0     True
1    False
2     True
Name: A, dtype: bool

To fix your str.contains attempt, join the values of B into a single pattern (note this checks each A against every value of B, not just the one in the same row):

df['A'].str.contains('|'.join(df.B.tolist()))
Out[91]: 
0     True
1    False
2     True
Name: A, dtype: bool
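
One caveat: both the replace trick above and this joined pattern treat the values in B as regular expressions, so if B may contain regex metacharacters it is safer to escape them first. A minimal sketch:

import re

pattern = '|'.join(map(re.escape, df.B.tolist()))  # escape any regex metacharacters in B
df['A'].str.contains(pattern)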
