How do I use window functions in PySpark?

Date: 2021-10-22 22:59:29

I'm trying to use some window functions (ntile and percentRank) on a data frame, but I don't know how to use them.

Can anyone help me with this, please? There are no examples of it in the Python API documentation (http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=ntile#pyspark.sql.functions.ntile).

Specifically, I'm trying to get quantiles of a numeric field in my data frame.

I'm using Spark 1.4.0.

1 answer

#1


To be able to use a window function you first have to create a window. The definition is much the same as in standard SQL: you can define the partitioning, the ordering, or both. First let's create some dummy data:

import numpy as np
np.random.seed(1)

# 10 "foo" keys and 10 "bar" keys, so we need 10 values from each distribution
keys = ["foo"] * 10 + ["bar"] * 10
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

df = sqlContext.createDataFrame([
    {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])

Make sure you're using a HiveContext (Spark < 2.0 only):

from pyspark.sql import HiveContext

assert isinstance(sqlContext, HiveContext)

Create a window:

from pyspark.sql.window import Window

w = Window.partitionBy(df.k).orderBy(df.v)

which is equivalent to

(PARTITION BY k ORDER BY v) 

in SQL.

As a rule of thumb, window definitions should always contain a PARTITION BY clause; otherwise Spark will move all the data to a single partition. ORDER BY is required for some functions and optional for others (typically aggregates).

There are also two optional clauses that can be used to define the window span: ROWS BETWEEN and RANGE BETWEEN. They won't be useful for us in this particular scenario.

Finally, we can use it in a query:

from pyspark.sql.functions import percentRank, ntile

df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3")
).show()

(In Spark 2.0 and later the function is exposed as percent_rank rather than percentRank.)

Note that ntile is not related in any way to quantiles: ntile(n) simply splits the ordered rows into n roughly equal-sized buckets and labels them 1..n; it does not compute the quantile values themselves.
