The Spark framework is written in Scala and runs on the Java Virtual Machine (JVM). Client applications can be written in Python, Java, Scala, or R.
Download Spark
Visit http://spark.apache.org/downloads.html and choose a pre-built package to download.
Extract Spark
Open a terminal, change to the directory containing the downloaded Spark archive, and extract it.
You can use the following commands:
cd ~
tar -xf spark-2.2.2-bin-hadoop2.7.tgz -C /opt/module/
cd /opt/module/spark-2.2.2-bin-hadoop2.7
ls
Note: in the tar command, the x flag tells tar to extract the archive, and the f flag specifies the archive file name.
Main directories in the Spark distribution
README.md
Contains brief instructions for getting started with Spark
bin
Contains executable scripts for interacting with Spark in various ways
core, streaming, python
Contain the source code of Spark's main components
examples
Contains Spark example programs you can read and run; very helpful for learning the Spark API
Running examples and the interactive shells
Run an example
./bin/run-example SparkPi 10
Scala shell
./bin/spark-shell --master local[2]
# The --master option sets the execution mode: local runs locally with a single thread; local[N] runs locally with N threads. (A quick check of the mode from inside the shell is sketched after the R shell command below.)
Python shell
./bin/pyspark --master local[2]
R shell
./bin/sparkR --master local[2]
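Whichever shell you start, the master URL you passed can be checked from inside the session. A minimal sketch in the Scala shell (the printed value simply echoes the --master option, so local[2] here assumes the command shown above):

scala> sc.master
res0: String = local[2]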
Submitting application scripts
# Applications in several languages can be submitted with spark-submit
./bin/spark-submit examples/src/main/python/pi.py 10
./bin/spark-submit examples/src/main/r/dataframe.R
...
Interactive analysis with the Spark shell
Scala
Use the spark-shell script for interactive analysis.
Basics
scala> val textFile = spark.read.textFile("README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]

scala> textFile.count() // Number of items in this Dataset
res0: Long = 126 // May be different from yours as README.md will change over time, similar to other outputs

scala> textFile.first() // First item in this Dataset
res1: String = # Apache Spark

// Use the filter transformation to return a subset of the original Dataset
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string]

// Chain transformations and actions together
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
Advanced
// Use Dataset transformations and actions to find the line with the most words
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Int = 15
// Count word occurrences (word count)
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count()
wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint]

scala> wordCounts.collect()
res6: Array[(String, Long)] = Array((means,1), (under,2), (this,3), (Because,1), (Python,2), (agree,1), (cluster.,1), ...)
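The self-contained applications later in this post call .cache() on their Datasets; the same can be done interactively to keep data in memory across repeated actions. A brief sketch, continuing the shell session above (the count depends on your copy of README.md):

scala> linesWithSpark.cache()
res7: linesWithSpark.type = [value: string]

scala> linesWithSpark.count()
res8: Long = 15

cache() marks the Dataset to be stored in memory the first time it is computed, so later actions such as count() reuse the cached data instead of rereading the file.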
Python
Use the pyspark script for interactive analysis.
Basics
>>> textFile = spark.read.text("README.md")

>>> textFile.count()  # Number of rows in this DataFrame
126

>>> textFile.first()  # First row in this DataFrame
Row(value=u'# Apache Spark')

# Use filter to return a new DataFrame with a subset of the rows
>>> linesWithSpark = textFile.filter(textFile.value.contains("Spark"))

# Chain transformations and actions together
>>> textFile.filter(textFile.value.contains("Spark")).count()  # How many lines contain "Spark"?
15
Advanced
# Find the line with the most words
>>> from pyspark.sql.functions import *
>>> textFile.select(size(split(textFile.value, "\s+")).name("numWords")).agg(max(col("numWords"))).collect()
[Row(max(numWords)=15)]

# Count word occurrences (word count)
>>> wordCounts = textFile.select(explode(split(textFile.value, "\s+")).alias("word")).groupBy("word").count()
>>> wordCounts.collect()
[Row(word=u'online', count=1), Row(word=u'graphs', count=1), ...]
Self-contained applications
Besides running interactively, Spark can also be linked into self-contained programs written in Java, Scala, or Python.
The main difference between a self-contained application and the shell is that the application must initialize its own SparkSession, whereas the shell creates one for you.
Scala
Count the lines containing "a" and the lines containing "b", respectively.
/* SimpleApp.scala */
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
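The spark-submit command below expects the application packaged as target/scala-2.11/simple-project_2.11-1.0.jar, which implies an sbt build. A minimal build.sbt sketch that would produce such a jar (the Scala and Spark versions are assumptions chosen to match the spark-2.2.2-bin-hadoop2.7 download used above):

name := "Simple Project"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.2"

Running sbt package in the project root builds the jar.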
Run the application
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-2.11/simple-project_2.11-1.0.jar
...
Lines with a: 46, Lines with b: 23
Java
Count the lines containing "a" and the lines containing "b", respectively.
/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();
    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();
    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
    spark.stop();
  }
}
Run the application
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
Python
Count the lines containing "a" and the lines containing "b", respectively.
If you package the application as a pip-installable project, add the following to its setup.py (where {site.SPARK_VERSION} is a placeholder for your Spark version, e.g. 2.2.2):

install_requires=[
    'pyspark=={site.SPARK_VERSION}'
]
"""SimpleApp.py"""from pyspark.sql import SparkSession logFile = "YOUR_SPARK_HOME/README.md" # Should be some file on your systemspark = SparkSession.builder().appName(appName).master(master).getOrCreate() logData = spark.read.text(logFile).cache() numAs = logData.filter(logData.value.contains('a')).count() numBs = logData.filter(logData.value.contains('b')).count() print("Lines with a: %i, lines with b: %i" % (numAs, numBs)) spark.stop()
Run the application
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --master local[4] \
  SimpleApp.py
...
Lines with a: 46, Lines with b: 23
Author: java大数据编程
Link: https://www.jianshu.com/p/49ee7475c96d