I have a data frame with 2 million rows and 15 columns. I want to group by 3 of these columns with ddply (all 3 are factors, and there are 780,000 unique combinations of these factors), and get the weighted mean of 3 columns (with weights defined by my data set). The following is reasonably quick:
system.time(a2 <- aggregate(cbind(col1,col2,col3) ~ fac1 + fac2 + fac3, data=aggdf, FUN=mean))
   user  system elapsed
 91.358   4.747 115.727
The problem is that I want to use weighted.mean instead of mean to calculate my aggregate columns.
If I try the following ddply on the same data frame (note that I cast to an immutable data frame with idata.frame), it does not finish even after 20 minutes:
x <- ddply(idata.frame(aggdf),
           c("fac1", "fac2", "fac3"),
           summarise,
           w = sum(w),
           col1 = weighted.mean(col1, w),
           col2 = weighted.mean(col2, w),
           col3 = weighted.mean(col3, w))
This operation seems to be CPU-hungry, but not very RAM-intensive.
EDIT: So I ended up writing this little function, which "cheats" a bit by exploiting some properties of the weighted mean: it does one multiplication and one division on the whole object, rather than on each slice.
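The property being exploited is just the definition of the weighted mean, weighted.mean(x, w) == sum(x * w) / sum(w), which lets the per-group divisions be deferred until after a plain sum-based aggregation. A quick illustration (my own toy values, not from the original post):

x <- c(1, 2, 3); w <- c(0.2, 0.3, 0.5)
all.equal(weighted.mean(x, w), sum(x * w) / sum(w))  # TRUE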
weighted_mean_cols <- function(df, bycols, aggcols, weightcol) {
  # Pre-multiply the value columns by the weights
  df[, aggcols] <- df[, aggcols] * df[, weightcol]
  # Sum the weights and the weighted values within each group
  df <- aggregate(df[, c(weightcol, aggcols)], by = as.list(df[, bycols]), sum)
  # Divide the summed weighted values by the summed weights
  df[, aggcols] <- df[, aggcols] / df[, weightcol]
  df
}
When I run it as:
a2 <- weighted_mean_cols(aggdf, c("fac1","fac2","fac3"), c("col1","col2","col3"),"w")
I get good performance and somewhat reusable, elegant code.
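A minimal sanity check on a toy data frame (my own example, not from the original post) confirms it matches weighted.mean:

toy <- data.frame(fac1 = c("a", "a", "b"), fac2 = "x", fac3 = "y",
                  col1 = c(1, 2, 10), col2 = c(4, 5, 6), col3 = c(7, 8, 9),
                  w = c(1, 3, 2))
weighted_mean_cols(toy, c("fac1", "fac2", "fac3"), c("col1", "col2", "col3"), "w")
# For group "a": col1 = (1*1 + 2*3) / (1 + 3) = 1.75, i.e. weighted.mean(c(1, 2), c(1, 3))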
2 Answers
#1
If you're going to use your edit, why not use rowsum and save yourself a few minutes of execution time?
nr <- 2e6
nc <- 3
aggdf <- data.frame(matrix(rnorm(nr*nc), nr, nc),
                    matrix(sample(100, nr*nc, TRUE), nr, nc), rnorm(nr))
colnames(aggdf) <- c("col1", "col2", "col3", "fac1", "fac2", "fac3", "w")
system.time({
  aggsums <- rowsum(data.frame(aggdf[, c("col1", "col2", "col3")] * aggdf$w, w = aggdf$w),
                    interaction(aggdf[, c("fac1", "fac2", "fac3")]))
  agg_wtd_mean <- aggsums[, 1:3] / aggsums[, 4]
})
#   user  system elapsed
#  16.21    0.77   16.99
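If you need the grouping factors back as columns rather than as rownames, one option (my addition, not part of the original answer) is to split the interaction rownames, which interaction() joins with "." by default:

# Safe here because the factor values themselves contain no "."
keys <- do.call(rbind, strsplit(rownames(aggsums), ".", fixed = TRUE))
colnames(keys) <- c("fac1", "fac2", "fac3")
head(cbind(as.data.frame(keys), agg_wtd_mean))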
#2
Though ddply is hard to beat for elegance and ease of code, I find that for big data, tapply is much faster. In your case, I would use a
do.call("cbind", list((w <- tapply(..)), tapply(..)))
Sorry for the dots and possibly faulty understanding of the question; but I am in a bit of a rush and must catch a bus in about minus five minutes!
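The "(..)" arguments are elided in the answer; one plausible expansion (my own sketch, not necessarily what the answerer intended) computes per-group weighted sums and weight sums with tapply and divides:

grp <- interaction(aggdf$fac1, aggdf$fac2, aggdf$fac3, drop = TRUE)
w_sums <- tapply(aggdf$w, grp, sum)
col1_wtd <- tapply(aggdf$col1 * aggdf$w, grp, sum) / w_sums  # weighted mean of col1 per group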