I am trying to read data from AWS S3 into a dataset/RDD in Java, but I get Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities. I am running the Spark code in Java from IntelliJ, so I also added the Hadoop dependencies to pom.xml. Below are my code and my pom.xml file.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkJava {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
                .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
                .config("fs.s3n.awsAccessKeyId", AWS_KEY)
                .config("fs.s3n.awsSecretAccessKey", AWS_SECRET_KEY)
                .getOrCreate();

        JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());
        String input_path = "s3a://bucket/2018/07/28/zqa.parquet";
        Dataset<Row> dF = spark.read().load(input_path); // THIS LINE CAUSES ERROR
    }
}

These are the dependencies in pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
        <version>3.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.1</version>
    </dependency>
</dependencies>

Any help would be appreciated. Thanks in advance!
1 Answer
梵蒂冈之花
Solved this problem by adding the following dependency:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.1</version>
</dependency>
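For context, the missing class org.apache.hadoop.fs.StreamCapabilities lives in hadoop-common; hadoop-aws 3.1.1 expects it, while spark-core 2.3.1 only pulls in an older hadoop-common transitively, which is why it is absent at runtime. Below is a minimal sketch of the Hadoop portion of the pom.xml with all Hadoop artifacts pinned to the same version (assuming you stay on Hadoop 3.1.1; the hadoop.version property name is just an illustration):

<properties>
    <!-- Keep every Hadoop artifact on the same version so classes such as
         org.apache.hadoop.fs.StreamCapabilities resolve consistently. -->
    <hadoop.version>3.1.1</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>

If the versions drift apart again (for example, hadoop-aws 3.1.1 against the Hadoop 2.7.x jars that a stock Spark 2.3.1 build bundles), the same NoClassDefFoundError tends to reappear.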