2 Answers
One way:
from pyspark.sql import Row

rdd = sc.parallelize([(20190701, [11, 21, 31], [('A', 10), ('B', 20)])])

# customize a Row class based on the target schema
MRow = Row("date", "0", "1", "2", "A", "B")

# unpack the list positionally; keep only the second item of each (key, value) pair
rdd.map(lambda x: MRow(x[0], *x[1], *map(lambda e: e[1], x[2]))).toDF().show()
+--------+---+---+---+---+---+
| date| 0| 1| 2| A| B|
+--------+---+---+---+---+---+
|20190701| 11| 21| 31| 10| 20|
+--------+---+---+---+---+---+
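If you'd rather not define a Row class at all, here is a minimal sketch under the same assumptions (the rdd above and an active SparkSession named spark): flatten each record into a plain tuple and pass the column names to createDataFrame directly.

df = spark.createDataFrame(
    rdd.map(lambda x: (x[0], *x[1], *[v for _, v in x[2]])),
    ["date", "0", "1", "2", "A", "B"],
)
df.show()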
Or another way:
rdd.map(lambda x: Row(date=x[0], **dict(
    (str(i), e) for i, e in list(enumerate(x[1])) + x[2]
))).toDF().show()
+---+---+---+---+---+--------+
| 0| 1| 2| A| B| date|
+---+---+---+---+---+--------+
| 11| 21| 31| 10| 20|20190701|
+---+---+---+---+---+--------+
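Note the column order: when a Row is built from keyword arguments, PySpark (before Spark 3.0) sorts the field names alphabetically, which is why date lands last here. A minimal sketch of restoring the intended order with select(), assuming the same rdd:

rdd.map(lambda x: Row(date=x[0], **dict(
    (str(i), e) for i, e in list(enumerate(x[1])) + x[2]
))).toDF().select("date", "0", "1", "2", "A", "B").show()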
The other answer collects the single record to the driver, flattens it into a plain Python list, and builds the DataFrame via pandas:
import pandas as pd

rdd = sc.parallelize((20190701, [11, 21, 31], [('A', 10), ('B', 20)]))
elements = rdd.take(3)
# flatten into [20190701, 11, 21, 31, 10, 20]
a = [elements[0]] + elements[1] + [elements[2][0][1], elements[2][1][1]]
sdf = spark.createDataFrame(pd.DataFrame([a], columns=["date", "0", "1", "2", "A", "B"]))
sdf.show()
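Note that this pulls the data to the driver with take(), so it is only practical when the RDD holds a single record (or very few); for larger data, do the flattening inside rdd.map() as in the first answer.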