2 Answers
Contributed 1856 experience points · received 11+ upvotes
You can use size to get the length of the array column and combine it with a window, as follows.
Import and create the sample DataFrame:
import pyspark.sql.functions as f
from pyspark.sql.window import Window

df = spark.createDataFrame([('Joe', 'Smith', [2, 3]),
                            ('Joe', 'Smith', [2, 3, 5, 6]),
                            ('Jim', 'Bush', [9, 7]),
                            ('Jim', 'Bush', [21]),
                            ('Sarah', 'Wood', [2, 3])],
                           ('first_name', 'last_name', 'requests_ID'))
Define a window that assigns row numbers within each (first_name, last_name) group, ordered by the length of the requests_ID column in descending order.
Here, f.size("requests_ID") gives the length of the requests_ID array, and desc() sorts by it in descending order.
w_spec = Window.partitionBy("first_name", "last_name").orderBy(f.size("requests_ID").desc())
Apply the window function and keep only the first row of each group:
df.withColumn("rn", f.row_number().over(w_spec)).where("rn ==1").drop("rn").show()
+----------+---------+------------+
|first_name|last_name| requests_ID|
+----------+---------+------------+
| Jim| Bush| [9, 7]|
| Sarah| Wood| [2, 3]|
| Joe| Smith|[2, 3, 5, 6]|
+----------+---------+------------+
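As a side note, on Spark 3.3 and later (an assumption here), the same result could also be obtained without a window by aggregating with max_by; this is just a sketch under that assumption:

# Pick, per (first_name, last_name), the requests_ID whose size() is largest.
# f.max_by requires Spark >= 3.3.
df.groupBy("first_name", "last_name") \
  .agg(f.max_by("requests_ID", f.size("requests_ID")).alias("requests_ID")) \
  .show()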
Contributed 1799 experience points · received 6+ upvotes
To complete the picture, suppose your current df looks like this:
--------------------------------------------------
first_name | last_name | requests_ID              |
--------------------------------------------------
Joe        | Smith     | [[9,7],[2,3,5,6]]        |
--------------------------------------------------
Jim        | Bush      | [[9,7],[21]]             |
--------------------------------------------------
Sarah      | Wood      | [2,3]                    |
--------------------------------------------------
Try this:
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType, ArrayType

def myfunc(x):
    # length of each sub-array in the row
    temp = []
    for item in x:
        temp.append(len(item))
    # index of the longest sub-array
    max_ind = temp.index(max(temp))
    return x[max_ind]

udf_extract = F.udf(myfunc, ArrayType(IntegerType()))

df = df.withColumn('new_requests_ID', udf_extract('requests_ID'))
#df.show()
Alternatively, without the separate udf variable, using the decorator form:
import pyspark.sql.functions as F

@F.udf
def myfunc(x):
    # length of each sub-array in the row
    temp = []
    for item in x:
        temp.append(len(item))
    # return the longest sub-array
    max_ind = temp.index(max(temp))
    return x[max_ind]

df = df.withColumn('new_requests_ID', myfunc('requests_ID'))
#df.show()
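One caveat: as far as I know, @F.udf used without arguments defaults to a StringType return, so new_requests_ID would come back as a string rather than an array. If the array type matters, the return type can be passed to the decorator; a minimal sketch of that variant:

from pyspark.sql.types import IntegerType, ArrayType

@F.udf(returnType=ArrayType(IntegerType()))
def myfunc(x):
    # same logic as above, but the result stays an array<int> column
    temp = [len(item) for item in x]
    return x[temp.index(max(temp))]

df = df.withColumn('new_requests_ID', myfunc('requests_ID'))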