1 Answer
You can pass a list of dictionaries to the createDataFrame function.
l = [{'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': 5, 'd': 6, 'e': 7}]
df = spark.createDataFrame(l)
# UserWarning: inferring schema from dict is deprecated, please use pyspark.sql.Row instead
df.show()
+----+---+---+----+----+
| a| b| c| d| e|
+----+---+---+----+----+
| 1| 2| 3|null|null|
|null| 4| 5| 6| 7|
+----+---+---+----+----+
Also, supply an explicit schema for the columns, because schema inference from dicts is deprecated. Creating the DataFrame from Row objects instead would require every dictionary to have the same keys.
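If you do want to follow the Row route, a minimal sketch (assuming the same list l and an active spark session as above) is to normalize every dict to the union of keys first, filling the gaps with None:

from pyspark.sql import Row

keys = sorted({k for d in l for k in d})                 # union of all keys across the dicts
rows = [Row(**{k: d.get(k) for k in keys}) for d in l]   # missing keys become None
df = spark.createDataFrame(rows)
df.show()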
Alternatively, define the schema programmatically by merging the keys from all of the dictionaries involved:
from pyspark.sql.types import StructType, IntegerType

# Merge the keys from several dicts into one sorted column list
def merge_keys(*dict_args):
    result = set()
    for dict_arg in dict_args:
        for key in dict_arg.keys():
            result.add(key)
    return sorted(result)

# Generate a schema from a column list
def generate_schema(columns):
    result = StructType()
    for column in columns:
        result.add(column, IntegerType(), nullable=True)  # change the type and nullability as needed
    return result

df = spark.createDataFrame(l, schema=generate_schema(merge_keys(*l)))
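As a quick sanity check (assuming the same data l as above), the explicit schema should give integer, nullable columns and the same table shown earlier:

df.printSchema()
# root
#  |-- a: integer (nullable = true)
#  |-- b: integer (nullable = true)
#  |-- c: integer (nullable = true)
#  |-- d: integer (nullable = true)
#  |-- e: integer (nullable = true)
df.show()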