I have updated my question to provide a clearer example.
Is it possible to use the drop_duplicates method in pandas to remove duplicate rows based on a column whose values contain lists? Consider column 'three', which in some rows holds a list of two items. Is there a way to drop the duplicate rows rather than doing it iteratively (my current workaround)?
I have outlined my problem by providing the following example:
import pandas as pd
data = [
    {'one': 50, 'two': '5:00', 'three': 'february'},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 90, 'two': '9:00', 'three': 'january'}
]
df = pd.DataFrame(data)
print(df)
one three two
0 50 february 5:00
1 25 [february, january] 6:00
2 25 [february, january] 6:00
3 25 [february, january] 6:00
4 90 january 9:00
df.drop_duplicates(['three'])
Results in the following error:
TypeError: type object argument after * must be a sequence, not map
1 Answer
#1
I think it's because the list type isn't hashable, and that breaks the duplicated logic. As a workaround, you could cast the lists to tuples like so:
df['four'] = df['three'].apply(lambda x: tuple(x) if isinstance(x, list) else x)
df.drop_duplicates('four')
one three two four
0 50 february 5:00 february
1 25 [february, january] 6:00 (february, january)
4 90 january 9:00 january
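If you would rather not keep a helper column in the frame, the same idea works by building the hashable key as a standalone Series and filtering with duplicated(). This is a minimal sketch of that variation, using the example data from the question; the variable names key and deduped are ours, not from the original answer:

import pandas as pd

data = [
    {'one': 50, 'two': '5:00', 'three': 'february'},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 25, 'two': '6:00', 'three': ['february', 'january']},
    {'one': 90, 'two': '9:00', 'three': 'january'}
]
df = pd.DataFrame(data)

# Build a hashable key: lists become tuples, scalar values pass through unchanged.
key = df['three'].apply(lambda x: tuple(x) if isinstance(x, list) else x)

# Keep only rows whose key has not been seen before; df itself gains no extra column.
deduped = df[~key.duplicated()]
print(deduped)

Because the tuple conversion lives only in the temporary key Series, the 'three' column in the result still contains the original lists.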