How to change a DataFrame column from String type to Double type in PySpark

Time: 2022-11-01 01:38:02

I have a dataframe with a column as String. I wanted to change the column type to Double type in PySpark.

Following is the way I did it:

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType

toDoublefunc = UserDefinedFunction(lambda x: x, DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))

Just wanted to know, is this the right way to do it? While running Logistic Regression I am getting some errors, so I wonder whether this is the reason for the trouble.

4 Answers

#1 (68 votes)

There is no need for a UDF here. Column already provides a cast method that accepts a DataType instance:

from pyspark.sql.types import DoubleType

changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))

or with a short string:

changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))
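
It is worth noting (my addition, not part of the original answer) that cast silently turns unparseable strings into null rather than raising an error, which matters if the downstream Logistic Regression expects non-null features. A minimal self-contained sketch with made-up sample data:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("1.5",), ("2.0",), ("oops",)], ["show"])
# Non-numeric strings become null under cast instead of failing the job.
df.withColumn("label", df["show"].cast("double")).show()
# +----+-----+
# |show|label|
# +----+-----+
# | 1.5|  1.5|
# | 2.0|  2.0|
# |oops| null|
# +----+-----+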

#2 (31 votes)

Preserve the name of the column and avoid adding an extra column by using the same name as the input column:

from pyspark.sql.types import DoubleType

changedTypedf = joindf.withColumn("show", joindf["show"].cast(DoubleType()))
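
A quick way to confirm the in-place change (a sketch; it assumes joindf originally has "show" as a string column, and any other columns are left untouched):

changedTypedf.printSchema()
# root
#  |-- show: double (nullable = true)
#  (remaining columns, if any, keep their original types)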

#3 (2 votes)

The solution was simple:

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType

toDoublefunc = UserDefinedFunction(lambda x: float(x), DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))
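
Note that, unlike cast, this UDF raises a ValueError on non-numeric strings and a TypeError on null values. A more defensive sketch (the try/except handling is my addition, not part of the original answer):

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType

def to_double(x):
    # Map nulls and unparseable strings to None instead of failing the job.
    if x is None:
        return None
    try:
        return float(x)
    except ValueError:
        return None

toDoublefunc = UserDefinedFunction(to_double, DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))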

#4 (1 vote)

The given answers are enough to deal with the problem, but I want to share another way, which may have been introduced in a newer version of Spark (I am not sure about it), so the given answers didn't cover it.

We can reach the column in a Spark statement with the col("column_name") function:

from pyspark.sql.functions import col

changedTypedf = joindf.withColumn("show", col("show").cast("double"))
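
The same cast also composes with select (a sketch of mine, not from the original answer); note that this form keeps only the cast column, whereas withColumn preserves the rest of the DataFrame:

from pyspark.sql.functions import col

# alias restores the original column name after the cast
changedTypedf = joindf.select(col("show").cast("double").alias("show"))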
