Here is the code I'm using:

df = None
from pyspark.sql.functions import lit

for category in file_list_filtered:
    data_files = os.listdir('HMP_Dataset/' + category)
    for data_file in data_files:
        print(data_file)
        temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/' + category + '/' + data_file, schema=schema)
        temp_df = temp_df.withColumn('class', lit(category))
        temp_df = temp_df.withColumn('source', lit(data_file))
        if df is None:
            df = temp_df
        else:
            df = df.union(temp_df)

And I get this error:

NameError                                 Traceback (most recent call last)
<ipython-input-4-4296b4e97942> in <module>
      9     for data_file in data_files:
     10         print(data_file)
---> 11         temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/'+category+'/'+data_file, schema = schema)
     12         temp_df = temp_df.withColumn('class', lit(category))
     13         temp_df = temp_df.withColumn('source', lit(data_file))

NameError: name 'spark' is not defined

How can I fix this?
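For context: NameError is Python's standard error for referencing a name that has never been assigned, and spark above is just an ordinary variable that was never created in this session. A minimal, self-contained illustration of the same failure mode (the names here are hypothetical, not part of the question's code):

```python
# Referencing a name before it is bound raises NameError.
try:
    undefined_reader.read()  # 'undefined_reader' was never defined
except NameError as e:
    print(e)  # name 'undefined_reader' is not defined

# Once the name is bound, the same reference works.
class Reader:
    def read(self):
        return 'ok'

undefined_reader = Reader()
print(undefined_reader.read())
```

The fix is therefore the same as for any NameError: bind the name (here, by creating a SparkSession named spark) before the loop runs, as the answers below show.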
2 Answers

慕工程0101907
Contributed 1887 experience points · earned 5+ upvotes
Initialize a SparkSession first, then use spark inside your loop:
df = None
from pyspark.sql.functions import lit
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('app_name').getOrCreate()
for category in file_list_filtered:
...
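As an aside, the if df is None / union pattern in the question is a general way of folding a sequence of pieces into one accumulated value. The same shape in plain Python, with list concatenation standing in for DataFrame.union and hypothetical sample data:

```python
from functools import reduce

# Each "file" yields a batch of rows; combining them one by one
# mirrors the df-accumulation loop in the question.
batches = [[1, 2], [3], [4, 5]]

combined = None
for batch in batches:
    if combined is None:
        combined = batch
    else:
        combined = combined + batch  # stands in for DataFrame.union

print(combined)  # → [1, 2, 3, 4, 5]

# Equivalent one-liner with reduce:
combined2 = reduce(lambda a, b: a + b, batches)
print(combined2)  # → [1, 2, 3, 4, 5]
```

With Spark DataFrames the reduce form works the same way once spark is defined, e.g. reduce(DataFrame.union, list_of_dfs).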

小怪兽爱吃肉
Contributed 1852 experience points · earned 1+ upvote
Try defining the spark variable:
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)