I installed Spark using the AWS EC2 guide and I can launch the program fine with the `bin/pyspark` script to get to the Spark prompt, and I can also run through the Quick Start guide successfully.

However, I cannot for the life of me figure out how to stop all of the verbose `INFO` logging after each command.

I have tried nearly every possible scenario in the code below (commenting out, setting to `OFF`) in my `log4j.properties` file, in the `conf` folder where I launch the application from as well as on each node, and nothing does anything. I still get the `INFO` logging statements printed after executing each statement. I am very confused about how this is supposed to work.

```
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
```

Here is my full classpath when I use `SPARK_PRINT_LAUNCH_COMMAND`:

```
Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main
```

Contents of `spark-env.sh`:

```
#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/
```
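For reference, this is one of the variants I tried in `conf/log4j.properties` (only the root logger line changed from the file shown above, switching the root level from `INFO` to `OFF`); it made no difference to the output:

```properties
# Variant tried: disable the root logger entirely
log4j.rootCategory=OFF, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```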