Building a single-job cluster on Kubernetes with Flink 1.7.1: Flink fails to load core-site.xml even though it is on the classpath, so the configuration is ignored. If I set the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID, it finds the credentials and works, but relying on core-site.xml alone never works without the env variables. I currently copy core-site.xml in the Dockerfile and, as the documentation says, set HADOOP_CONF_DIR as an env variable pointing to it. It is still not loaded, resulting in NoCredentialsProvider. The exception is:

Caused by: org.apache.flink.fs.s3base.shaded.com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Unable to load credentials from service endpoint

Classpath loaded by the JobManager/TaskManager:

Classpath: /opt/flink-1.7.1/lib/aws-java-sdk-core-1.11.489.jar:/opt/flink-1.7.1/lib/aws-java-sdk-kms-1.11.489.jar:/opt/flink-1.7.1/lib/aws-java-sdk-s3-1.10.6.jar:/opt/flink-1.7.1/lib/flink-python_2.12-1.7.1.jar:/opt/flink-1.7.1/lib/flink-s3-fs-hadoop-1.7.1.jar:/opt/flink-1.7.1/lib/flink-shaded-hadoop2-uber-1.7.1.jar:/opt/flink-1.7.1/lib/hadoop-aws-2.8.0.jar:/opt/flink-1.7.1/lib/httpclient-4.5.6.jar:/opt/flink-1.7.1/lib/httpcore-4.4.11.jar:/opt/flink-1.7.1/lib/jackson-annotations-2.9.8.jar:/opt/flink-1.7.1/lib/jackson-core-2.9.8.jar:/opt/flink-1.7.1/lib/jackson-databind-2.9.8.jar:/opt/flink-1.7.1/lib/job.jar:/opt/flink-1.7.1/lib/joda-time-2.10.1.jar:/opt/flink-1.7.1/lib/log4j-1.2.17.jar:/opt/flink-1.7.1/lib/slf4j-log4j12-1.7.15.jar:/opt/flink-1.7.1/lib/flink-dist_2.12-1.7.1.jar::/hadoop/conf:
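For context, a minimal core-site.xml of the kind being copied into the image would look roughly like the sketch below. The property names are the standard Hadoop s3a ones; the values are placeholders, not taken from the question:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Placeholder credentials: replace with real values or leave unset
       to fall back on env variables / instance profiles. -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

For this file to be picked up, HADOOP_CONF_DIR must point at the directory containing it (here /hadoop/conf, matching the classpath entry above), not at the file itself.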
2 Answers
POPMUISE
OK, this is solved: if you have the shaded Hadoop jar on the classpath (I moved it from /opt to /lib), you need to specify your keys in flink-conf. But now I get the following exception:
Caused by: java.io.IOException: org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider
constructor exception. A class specified in fs.s3a.aws.credentials.provider must provide a public constructor accepting URI and Configuration, or a public factory method named getInstance that accepts no arguments, or a public default constructor.
Any ideas?
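For reference, "specify your keys in flink-conf" as mentioned above would be along these lines, using the s3.access-key / s3.secret-key options that Flink's s3 filesystems document (values are placeholders):

```yaml
# flink-conf.yaml -- placeholder values, replace with real credentials
s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY
```

As for the new exception: it usually means fs.s3a.aws.credentials.provider is set to a class (here org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider) that the Hadoop 2.8 s3a connector cannot instantiate from configuration; removing that property, or pointing it at a provider class with a suitable public constructor, is the usual direction to investigate. This is a hedged suggestion, not something confirmed in the thread.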