1. Passwordless SSH login setup
[hadoop@master ~]$ ssh -version
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
Bad escape character 'rsion'.
Check the SSH version as shown above (the "Bad escape character" complaint is just ssh objecting to the non-standard flag and can be ignored). If SSH is not installed, install it with:
[hadoop@master ~]$ sudo yum install openssh-server
Run the following once on every machine:
$ ssh-keygen -t rsa    # press Enter at every prompt and accept the defaults
Once this finishes, every machine has a ~/.ssh folder.
$ ll ~/.ssh            # list the files under .ssh
-rw-------. 1 hadoop hadoop 1580 Apr 18 16:53 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Apr 15 16:01 id_rsa
-rw-r--r--. 1 hadoop hadoop 395 Apr 15 16:01 id_rsa.pub
Send the public key id_rsa.pub generated on slave1, slave2 and slave3 to the master machine:
On slave1:
[hadoop@slave1 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave1
On slave2:
[hadoop@slave2 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave2
On slave3:
[hadoop@slave3 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave3
On the master machine, append all the public keys to authorized_keys, the new file used for authentication:
[hadoop@master ~]$ cat ~/.ssh/id_rsa.pub* >> ~/.ssh/authorized_keys
The permissions on authorized_keys must then be fixed (this setting matters: insecure permissions will stop RSA public-key authentication from working):
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys    # a frequently missed step when passwordless login fails
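If passwordless login still fails after this, another common culprit is the permission on the ~/.ssh directory itself: sshd ignores the key files when the directory is group- or world-writable. A quick extra check worth running (not a step from the original write-up):
$ chmod 700 ~/.ssh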
Distribute the authorized_keys file to every slave:
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave1:~/.ssh/
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave2:~/.ssh/
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave3:~/.ssh/
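Passwordless login can now be sanity-checked from master; each ssh below should print the slave's hostname without prompting for a password (a quick verification sketch using the hostnames assumed throughout this guide):
[hadoop@master ~]$ for h in slave1 slave2 slave3; do ssh $h hostname; done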
2. Installing the Java environment
After downloading the jdk-8u60-linux-x64.tar.gz package (place it under ~/bigdataspace):
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ tar -zxvf jdk-8u60-linux-x64.tar.gz
Edit the environment variable configuration file:
[hadoop@master bigdataspace]$ sudo vi /etc/profile
(append the following at the end of the file)
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Reload the file so the new variables take effect:
[hadoop@master bigdataspace]$ source /etc/profile
Verify that Java installed successfully:
[hadoop@master bigdataspace]$ java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
(Java must be installed on every machine by repeating the steps above.)
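Since passwordless SSH is already in place, one convenient way to confirm Java on all the slaves from master (a convenience sketch, not part of the original steps):
[hadoop@master ~]$ for h in slave1 slave2 slave3; do echo "== $h =="; ssh $h 'source /etc/profile; java -version'; done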
Then run on every machine:
[hadoop@master ~]$ sudo chmod 777 /data/    # let every user read and write data under /data
3. Synchronizing time across the cluster
Check whether the NTP service is installed:
[hadoop@master ~]$ rpm -q ntp
ntp-4.2.6p5-1.el6.centos.x86_64    # this line means ntp is already installed; if not, the output is empty
If it is not installed, install it with:
[hadoop@master ~]$ sudo yum install ntp
Configure the NTP service to start at boot:
[hadoop@master ~]$ sudo chkconfig ntpd on
[hadoop@master ~]$ chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
(UDP port 123 has to be opened on the master machine so that the other nodes can use ntpdate to sync master's time through that port.)
[hadoop@master ~]$ sudo vi /etc/sysconfig/iptables
(the new port rule)
-A INPUT -m state --state NEW -m udp -p udp --dport 123 -j ACCEPT
[hadoop@master ~]$ sudo service iptables restart
Before configuring ntpd, sync the time once by hand with ntpdate, so that the offset between this machine and the external time servers is not so large that ntpd refuses to sync.
[hadoop@master ~]$ sudo ntpdate pool.ntp.org
26 Apr 17:12:15 ntpdate[7376]: step time server 202.112.29.82 offset 13.827386 sec
Edit the relevant configuration file on the master machine:
[hadoop@master ~]$ sudo vim /etc/ntp.conf
(only the required changes are shown)
# Hosts on local network are less restricted.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# allow hosts in the same LAN segment to sync their time:
restrict 10.3.19.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# external time servers
server pool.ntp.org iburst
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# allow update time by the upper server
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
# fall back to the local clock when the external time servers are unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
#############################################################
Configuration of /etc/ntp.conf on the other nodes (slave1, slave2, slave3):
........
#server 3.centos.pool.ntp.org iburst
# external time server: follow master's time
server master iburst
........
[hadoop@master ~]$ sudo service ntpd start
(Every machine needs ntpd set to start at boot, plus a first manual start.)
$ sudo chkconfig ntpd on    # start ntpd at boot
$ sudo service ntpd start   # start ntpd now
Time sync setup reference: http://cn.soulmachine.me/blog/20140124/
Time sync summary:
Install ntpd on every node and set it to start at boot (start it manually the first time). Through /etc/ntp.conf, master acts as the cluster's time server, keeping its own clock synced against the Internet, while the other nodes use master's address as their sync source.
Right after configuration the other nodes may still be out of sync; it can take around 30 minutes before they all converge on master's time.
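To watch the convergence, ntpq can list a node's peers (a quick check, not from the original write-up); once a * appears in front of the master entry, that slave has selected master as its sync source:
[hadoop@slave1 ~]$ ntpq -p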
4. Installing and configuring Hadoop
After downloading the hadoop-2.6.0-cdh5.5.0.tar.gz package (placed under ~/bigdataspace on the master machine):
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ tar -zxvf hadoop-2.6.0-cdh5.5.0.tar.gz
Go into the Hadoop configuration directory:
[hadoop@master ~]$ cd ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/etc/hadoop
1> Set JAVA_HOME in hadoop-env.sh:
[hadoop@master hadoop]$ vi hadoop-env.sh
# set JAVA_HOME in this file, so that it is correctly defined on
# The java implementation to use.
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
2> Set JAVA_HOME in yarn-env.sh:
[hadoop@master hadoop]$ vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
3> List the slave nodes' IPs or hostnames in slaves:
[hadoop@master hadoop]$ vi slaves
slave1
slave2
slave3
4> Edit core-site.xml:
[hadoop@master hadoop]$ vi core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-2.6.0-cdh5.5.0/tmp</value>
  </property>
</configuration>
5> Edit hdfs-site.xml:
[hadoop@master hadoop]$ vi hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop-2.6.0-cdh5.5.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop-2.6.0-cdh5.5.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
6> Edit mapred-site.xml:
[hadoop@master hadoop]$ vi mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
7> Edit yarn-site.xml:
[hadoop@master hadoop]$ vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
The CDH release is missing Hadoop's native libraries, so they have to be brought in separately or errors will be reported. Solutions are described at:
http://www.cnblogs.com/huaxiaoyao/p/5046374.html
The approach taken for this installation:
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ wget http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/5.5.0/RPMS/x86_64/hadoop-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64.rpm
[hadoop@master bigdataspace]$ rpm2cpio *.rpm | cpio -div
Still in the bigdataspace folder, copy the extracted native libraries into place:
$ mkdir -p ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/lib/native/
$ cp -r ./usr/lib/hadoop/lib/native/* ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/lib/native/
Delete the files left over from unpacking the RPM:
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/etc/
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/usr/
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/var/
$ rm ~/bigdataspace/hadoop-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64.rpm
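Whether the native libraries are now being picked up can be confirmed with Hadoop's built-in checknative command; run from the Hadoop directory, it should report the native hadoop library as loaded:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./bin/hadoop checknative -a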
5. Distributing the configured Hadoop to the slave nodes with scp
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave1:~/bigdataspace/
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave2:~/bigdataspace/
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave3:~/bigdataspace/
Edit the environment variable configuration file (on every machine):
[hadoop@master bigdataspace]$ sudo vi /etc/profile
(append the following at the end of the file)
export HADOOP_HOME=/home/hadoop/bigdataspace/hadoop-2.6.0-cdh5.5.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
Reload the file so the new variables take effect:
[hadoop@master bigdataspace]$ source /etc/profile
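A quick check that the new PATH took effect, since hadoop should now resolve from any directory:
[hadoop@master bigdataspace]$ hadoop version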
6. Starting and verifying Hadoop
[hadoop@master ~]$ cd ~/bigdataspace/hadoop-2.6.0-cdh5.5.0          # enter the hadoop directory
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./bin/hdfs namenode -format  # format the namenode
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./sbin/start-dfs.sh          # start dfs
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./sbin/start-yarn.sh         # start yarn
Use the jps command to check whether each node's processes came up correctly. On master you should see the following processes:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ jps
3407 SecondaryNameNode
3218 NameNode
3552 ResourceManager
3910 Jps
On slave1 you should see:
[hadoop@slave1 ~]$ jps
2072 NodeManager
2213 Jps
1962 DataNode
Alternatively, open http://master:8088 in a browser: the Hadoop management UI should appear, and http://master:8088/cluster/nodes should list the slave1, slave2 and slave3 nodes.
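As a final smoke test, the examples jar bundled with the distribution can drive a small MapReduce job end to end (the jar path below is where the CDH tarball typically keeps it; adjust the path or version glob if your layout differs). If HDFS and YARN are healthy, the job completes and prints an estimate of Pi:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-*.jar pi 2 10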
7. Starting Hadoop's bundled JobHistory Server
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ sbin/mr-jobhistory-daemon.sh start historyserver
(the jobhistory settings were already added to mapred-site.xml above)
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ jps
5314 Jps
19994 JobHistoryServer
19068 NameNode
19422 ResourceManager
19263 SecondaryNameNode
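With the JobHistoryServer process running, its web UI should also be reachable at http://master:19888, the address set by mapreduce.jobhistory.webapp.address in mapred-site.xml.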
Reference:
http://blog.csdn.net/liubei_whut/article/details/42397985
8. Problems stopping the Hadoop cluster
After Linux has been running for a while, some files under /tmp get cleaned up automatically. Hadoop's stop script stop-all.sh relies on the pid files under /tmp to find the processes to kill, so once those files have been cleaned up you may see errors like:
$ ./sbin/stop-all.sh
Stopping namenodes on [master]
master: no namenode to stop
slave1: no datanode to stop
slave2: no datanode to stop
slave3: no datanode to stop
Stopping secondary namenodes [master]
master: no secondarynamenode to stop
......
Option 1: recreate the pid files by hand under /tmp (a scripted sketch follows the file lists).
On the master node (each file holds the corresponding process id):
hadoop-hadoop-namenode.pid
hadoop-hadoop-secondarynamenode.pid
yarn-hadoop-resourcemanager.pid
On the slave nodes (each file holds the corresponding process id):
hadoop-hadoop-datanode.pid
yarn-hadoop-nodemanager.pid
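A sketch of scripting the recreation of one such file from jps output (the names follow the hadoop-<user>-<daemon>.pid / yarn-<user>-<daemon>.pid pattern above; swap in the daemon and file you need):
$ jps | awk '$2 == "NameNode" {print $1}' > /tmp/hadoop-hadoop-namenode.pid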
Option 2: kill each corresponding process id with kill -9.
The permanent fix:
(after first shutting the cluster down with option 1 or option 2)
1. Edit hadoop-env.sh:
#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
#export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
2. Edit yarn-env.sh:
export YARN_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
3. Create the pids folder:
$ mkdir /data/hadoop-2.6.0-cdh5.5.0/pids
(in practice the pids folder turns out to be created automatically, so this step is unnecessary)
Author: 抹布先生M
Link: https://www.jianshu.com/p/604f2be9fcd3