I'm trying to apply a very simple pandas UDF in PySpark (Spark 2.4.5), but it isn't working for me. Example:

```
pyspark --master local[4] \
  --conf "spark.pyspark.python=/opt/anaconda/envs/bd9/bin/python3" \
  --conf "spark.pyspark.driver.python=/opt/anaconda/envs/bd9/bin/python3"
```

```python
>>> my_df = spark.createDataFrame(
...     [
...         (1, 0),
...         (2, 1),
...         (3, 1),
...     ],
...     ["uid", "partition_id"]
... )
>>> from pyspark.sql.types import StructType, StructField, StringType
>>> schema = StructType([StructField("uid", StringType())])
>>> from pyspark.sql.functions import pandas_udf, PandasUDFType
>>> import pandas
>>> @pandas_udf(schema, PandasUDFType.GROUPED_MAP)
... def apply_model(sample_df):
...     print(sample_df)
...     return pandas.DataFrame({"uid": sample_df["uid"]})
...
>>> result = my_df.groupBy("partition_id").apply(apply_model)
>>> result.show()
   uid  partition_id
0    1             0
[Stage 13:==================================================>   (92 + 4) / 100]
   uid  partition_id
0    2             1
1    3             1
+---+
|uid|
+---+
|   |
|   |
|   |
+---+
```

Somehow `uid` is not reflected in the result. Can you tell me what I'm missing here?
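For context (not part of the original question): one plausible cause is a dtype mismatch between the declared output schema and the pandas DataFrame the UDF returns. The schema declares `uid` as `StringType`, but `sample_df["uid"]` is an int64 column, and Spark 2.4's Arrow conversion can silently produce empty values on such a mismatch. Below is a minimal pandas-only sketch of the shape of the fix, assuming the remedy is to cast the column so its dtype matches the declared schema (the alternative would be declaring `LongType` in the schema). The `apply_model` body here mirrors the UDF from the question but runs outside Spark:

```python
import pandas as pd

def apply_model(sample_df):
    # Cast the int64 column to string so the returned dtype
    # matches the StringType declared in the output schema.
    return pd.DataFrame({"uid": sample_df["uid"].astype(str)})

# Stand-in for one group that Spark would pass to the grouped-map UDF.
group = pd.DataFrame({"uid": [2, 3], "partition_id": [1, 1]})
print(apply_model(group)["uid"].tolist())  # ['2', '3']
```

This is only a sketch of the type alignment, not a verified Spark run; the same cast would go inside the actual `@pandas_udf(..., PandasUDFType.GROUPED_MAP)` function.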