Spark Cluster Configuration

This article walks through setting up a Spark cluster in standalone mode.

First, install Spark on every node. For the installation steps, see:
https://www.jianshu.com/p/b0d88e5dd503

Go to Spark's conf directory and list the configuration files:
[river@s201 spark]$ cd /soft/spark/conf/
[river@s201 conf]$ ll
total 36
-rw-r--r--. 1 river river  996 Oct 29 14:36 docker.properties.template
-rw-r--r--. 1 river river 1105 Oct 29 14:36 fairscheduler.xml.template
-rw-r--r--. 1 river river 2025 Oct 29 14:36 log4j.properties.template
-rw-r--r--. 1 river river 7801 Oct 29 14:36 metrics.properties.template
-rw-r--r--. 1 river river  865 Oct 29 14:36 slaves.template
-rw-r--r--. 1 river river 1292 Oct 29 14:36 spark-defaults.conf.template
-rwxr-xr-x. 1 river river 4221 Oct 29 14:36 spark-env.sh.template
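Besides slaves, the spark-env.sh template is usually copied and filled in as well. A minimal sketch, assuming this article's hostnames; the JAVA_HOME path and memory size below are placeholders, so adjust them to your environment:

```shell
# /soft/spark/conf/spark-env.sh -- created via: cp spark-env.sh.template spark-env.sh
export JAVA_HOME=/soft/jdk            # JDK install path (assumption; set to yours)
export SPARK_MASTER_HOST=s201         # hostname of the master node in this setup
export SPARK_MASTER_PORT=7077         # default standalone master RPC port
export SPARK_WORKER_MEMORY=1g         # memory each worker may allocate (assumption)
```

After editing, sync this file to the worker nodes the same way as the slaves file below.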
Configure the slaves file on the master node

Rename the slaves.template file in that directory to slaves:

mv slaves.template slaves

Then edit the slaves file and add the worker hostnames, one per line:

s202
s203
s204
My /etc/hosts is configured as follows:

[river@s201 conf]$ cat /etc/hosts
127.0.0.1 localhost
192.168.172.201 s201
192.168.172.202 s202
192.168.172.203 s203
192.168.172.204 s204
Sync the slaves file to the worker nodes:

[river@s201 conf]$ scp slaves river@s202:/soft/spark/conf/
slaves                                        100%  871   736.9KB/s   00:00
[river@s201 conf]$ scp slaves river@s203:/soft/spark/conf/
slaves                                        100%  871   535.0KB/s   00:00
[river@s201 conf]$ scp slaves river@s204:/soft/spark/conf/
slaves                                        100%  871   530.9KB/s   00:00
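With more workers, the per-host copies can be scripted. A dry-run sketch, assuming the hostnames from this cluster; it only prints each scp command, and dropping the leading echo performs the copies for real:

```shell
# Dry run: print the scp command for each worker instead of executing it.
# Remove the leading "echo" inside the loop to actually copy the file.
workers="s202 s203 s204"
for h in $workers; do
  echo scp /soft/spark/conf/slaves "river@$h:/soft/spark/conf/"
done
```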
Next, start the Spark cluster:

[river@s201 conf]$ /soft/spark/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /soft/spark/logs/spark-river-org.apache.spark.deploy.master.Master-1-s201.out
s204: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-river-org.apache.spark.deploy.worker.Worker-1-s204.out
s202: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-river-org.apache.spark.deploy.worker.Worker-1-s202.out
s203: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-river-org.apache.spark.deploy.worker.Worker-1-s203.out
s201: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-river-org.apache.spark.deploy.worker.Worker-1-s201.out
Check the Java processes with jps:

[river@s201 conf]$ jps
66208 Jps
2148 SecondaryNameNode
2310 ResourceManager
1943 NameNode
66072 Master
66156 Worker

[river@s202 soft]$ jps
1781 DataNode
49061 Jps
48987 Worker
1902 NodeManager
The master node now runs both a Master and a Worker process, and the worker nodes each run a Worker, which means the cluster has started successfully.
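Beyond jps, a quick way to exercise the cluster is to submit the bundled SparkPi example to the standalone master. A sketch that only assembles and prints the command, assuming this article's install path and master hostname; run the printed command on the master node:

```shell
# Build a smoke-test command; SPARK_HOME and the master URL match this article's setup.
SPARK_HOME=/soft/spark
MASTER_URL=spark://s201:7077   # standalone master listens on port 7077 by default
cmd="$SPARK_HOME/bin/run-example --master $MASTER_URL SparkPi 10"
echo "$cmd"   # execute it on the master node with: eval "$cmd"
```

If the job finishes and prints an approximation of pi, the master can schedule work on the workers.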
Finally, open the master's web UI (http://s201:8080 by default) to check the cluster status. All nodes are shown as alive and running.
Thanks for reading; if you enjoyed this article, please give it a like.

Author: 良人与我
Link: https://www.jianshu.com/p/aee59bcafc6a