1 Answer
This can be done with an unpivoting trick.
Suppose you have a dataset like the one below; call it patients' test results. The columns A, B, C, ... represent test types (test type A, test type B, ...), and the values in those columns are the numeric test results.
+-------------+---+---+---+---+---+---+---+
|PatientNumber| A| B| C| D| E| F| G|
+-------------+---+---+---+---+---+---+---+
| 101| 1| 2| 3| 4| 5| 6| 7|
| 102| 11| 12| 13| 14| 15| 16| 17|
+-------------+---+---+---+---+---+---+---+
I added a PatientNumber column just to make the data look more realistic; you can drop it from the code.
I saved this dataset as a CSV and read it back:
val testDF = spark.read.format("csv").option("header", "true").load("""C:\TestData\CSVtoJSon.csv""")
Let's create two arrays, one for the id columns and another for all the test-type columns:
val idCols = Array("PatientNumber")
val valCols = testDF.columns.diff(idCols)
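As a quick plain-Scala illustration (column names hardcoded to match the example table, no Spark needed), diff simply removes the id columns from the full column list:

```scala
// Column list matching the example table above (hardcoded for this sketch)
val allCols = Array("PatientNumber", "A", "B", "C", "D", "E", "F", "G")
val ids     = Array("PatientNumber")
// diff keeps every column that is not an id column
val vals    = allCols.diff(ids)
// vals: Array("A", "B", "C", "D", "E", "F", "G")
```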
Then here is the code for the unpivot. Note the quoting: each column name appears twice in the stack() call, once as a quoted string literal (the Type label) and once as a bare column reference (the Value):
import org.apache.spark.sql.functions.expr
import spark.implicits._

val valcolNames = valCols.map(x => List("'" + x + "'", x))
val unPivotedDF = testDF.select($"PatientNumber", expr(s"""stack(${valCols.size},${valcolNames.flatMap(x => x).mkString(",")}) as (Type,Value)"""))
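To see why this works, here is a plain-Scala sketch (no Spark needed, value columns hardcoded to a short list) that builds the same stack() expression string that gets passed to expr. Each column contributes a quoted label and a bare column reference:

```scala
// A few value columns, hardcoded to keep the sketch short
val cols = Array("A", "B", "C")
// Each column yields a ('label', column) pair for stack()
val pairs = cols.map(x => List("'" + x + "'", x))
// Flatten the pairs and join them into the stack() argument list
val stackExpr =
  s"stack(${cols.size},${pairs.flatMap(x => x).mkString(",")}) as (Type,Value)"
// stackExpr: stack(3,'A',A,'B',B,'C',C) as (Type,Value)
```

stack(n, ...) turns each (label, column) pair into its own output row, which is exactly the unpivot.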
This is what the unpivoted data looks like:
+-------------+----+-----+
|PatientNumber|Type|Value|
+-------------+----+-----+
| 101| A| 1|
| 101| B| 2|
| 101| C| 3|
| 101| D| 4|
| 101| E| 5|
| 101| F| 6|
| 101| G| 7|
| 102| A| 11|
| 102| B| 12|
| 102| C| 13|
| 102| D| 14|
| 102| E| 15|
| 102| F| 16|
| 102| G| 17|
+-------------+----+-----+
Finally, write this unpivoted DataFrame out as JSON:
unPivotedDF.coalesce(1).write.format("json").mode("Overwrite").save("""C:\TestData\output""")
The contents of the JSON file match the result you want:
{"PatientNumber":"101","Type":"A","Value":"1"}
{"PatientNumber":"101","Type":"B","Value":"2"}
{"PatientNumber":"101","Type":"C","Value":"3"}
{"PatientNumber":"101","Type":"D","Value":"4"}
{"PatientNumber":"101","Type":"E","Value":"5"}
{"PatientNumber":"101","Type":"F","Value":"6"}
{"PatientNumber":"101","Type":"G","Value":"7"}
{"PatientNumber":"102","Type":"A","Value":"11"}
{"PatientNumber":"102","Type":"B","Value":"12"}
{"PatientNumber":"102","Type":"C","Value":"13"}
{"PatientNumber":"102","Type":"D","Value":"14"}
{"PatientNumber":"102","Type":"E","Value":"15"}
{"PatientNumber":"102","Type":"F","Value":"16"}
{"PatientNumber":"102","Type":"G","Value":"17"}