Essentially, I am joining Table_A to Table_B on a key in order to look up attribute columns in Table_B for the names present in Table_A.
Table_B can be thought of as the master name table that stores various attributes about a name. Table_A represents incoming data with information about a name.
There are two columns that represent a name: 'raw_name' and 'real_name'. A raw_name is the real_name prefixed with a code string and an underscore.
i.e.
raw_name = CE993_VincentHanna
real_name = VincentHanna
Key = real_name, which exists in Table_A and Table_B
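As an aside, if real_name ever needs to be derived from raw_name, a minimal sketch (assuming the code prefix never contains an underscore itself, so the first underscore is the separator):

```python
import pandas as pd

raw = pd.Series(['CE993_VincentHanna', 'AW103_Waingro'])
# Split on the first underscore only and keep the part after it
real = raw.str.split('_', n=1).str[1]
print(real.tolist())  # ['VincentHanna', 'Waingro']
```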
Please see the MySQL tables and query here: http://sqlfiddle.com/#!9/65e13/1
For all real_names in Table_A that do NOT exist in Table_B, I want to store the raw_name/real_name pairs in an object so I can send an alert to the data-entry staff for manual insertion.
For all real_names in Table_A that DO exist in Table_B, we already know about the name, so we can add the new raw_name associated with that real_name to our master Table_B.
In MySQL this is easy, as you can see in my SQL Fiddle example. I join on real_name and collapse the result with GROUP BY a.real_name, since I don't care whether Table_B has multiple records for the same real_name.
All I want is to pull the attributes (stats1, stats2, stats3) so I can assign them to the newly discovered raw_name.
In the MySQL query result I can then separate the NULL records to be sent for manual data entry and automatically insert the remaining records into Table_B.
Now I am trying to do the same in pandas, but I am stuck at the groupby-on-real_name step.
import pandas as pd

# Table_A: incoming data
e = {'raw_name': pd.Series(['AW103_Waingro', 'CE993_VincentHanna', 'EES43_NeilMcCauley', 'SME16_ChrisShiherlis',
                            'MEC14_MichaelCheritto', 'OTP23_RogerVanZant', 'MDU232_AlanMarciano']),
     'real_name': pd.Series(['Waingro', 'VincentHanna', 'NeilMcCauley', 'ChrisShiherlis', 'MichaelCheritto',
                             'RogerVanZant', 'AlanMarciano'])}
# Table_B: master name table
f = {'raw_name': pd.Series(['SME893_VincentHanna', 'TVA405_VincentHanna', 'MET783_NeilMcCauley',
                            'CE321_NeilMcCauley', 'CIN453_NeilMcCauley', 'NIPS16_ChrisShiherlis',
                            'ALTW12_MichaelCheritto', 'NSP42_MichaelCheritto', 'CONS23_RogerVanZant',
                            'WAUE34_RogerVanZant']),
     'real_name': pd.Series(['VincentHanna', 'VincentHanna', 'NeilMcCauley', 'NeilMcCauley', 'NeilMcCauley',
                             'ChrisShiherlis', 'MichaelCheritto', 'MichaelCheritto', 'RogerVanZant',
                             'RogerVanZant']),
     'stats1': pd.Series(['meh1', 'meh1', 'yo1', 'yo1', 'yo1', 'hello1', 'bye1', 'bye1', 'namaste1',
                          'namaste1']),
     'stats2': pd.Series(['meh2', 'meh2', 'yo2', 'yo2', 'yo2', 'hello2', 'bye2', 'bye2', 'namaste2',
                          'namaste2']),
     'stats3': pd.Series(['meh3', 'meh3', 'yo3', 'yo3', 'yo3', 'hello3', 'bye3', 'bye3', 'namaste3',
                          'namaste3'])}
df_e = pd.DataFrame(e)
df_f = pd.DataFrame(f)

# Left join on real_name; raw_name is the only colliding column, so it gets the suffixes
df_new = pd.merge(df_e, df_f, how='left', on='real_name', suffixes=['_left', '_right'])
df_new_grouped = df_new.groupby('raw_name_left')
Now, how do I compress/collapse the groups in df_new_grouped on real_name, as I did in MySQL?
Once I have an object with the collapsed results, I can slice the DataFrame to report the real_names we have no record of (NULL values), and for the ones we already know, store the newly discovered raw_name.
2 Answers
#1
You can drop duplicates based on the raw_name_left column, and also remove the raw_name_right column using drop:
In [99]: df_new.drop_duplicates('raw_name_left').drop(columns='raw_name_right')
Out[99]:
raw_name_left real_name stats1 stats2 stats3
0 AW103_Waingro Waingro NaN NaN NaN
1 CE993_VincentHanna VincentHanna meh1 meh2 meh3
3 EES43_NeilMcCauley NeilMcCauley yo1 yo2 yo3
6 SME16_ChrisShiherlis ChrisShiherlis hello1 hello2 hello3
7 MEC14_MichaelCheritto MichaelCheritto bye1 bye2 bye3
9 OTP23_RogerVanZant RogerVanZant namaste1 namaste2 namaste3
11 MDU232_AlanMarciano AlanMarciano NaN NaN NaN
#2
Just to be thorough, this can also be done with groupby, an approach I found on Wes McKinney's blog, although drop_duplicates is cleaner and more efficient:
http://wesmckinney.com/blog/filtering-out-duplicate-dataframe-rows/
# Take the first row label of each raw_name_left group, then select those rows
index = [gp_keys[0] for gp_keys in df_new_grouped.groups.values()]
unique_df = df_new.reindex(index)
unique_df