Hadoop 2 NameNode HA + Federation + ResourceManager HA: Experiment and Analysis

This post walks through a hands-on experiment with Hadoop 2 NameNode HA combined with HDFS federation and ResourceManager HA. These are setups that trip up a lot of people in practice, so read carefully and work through the steps yourself.


The experiment uses Hadoop 2.5.2 on five virtual machines, all running CentOS 6.6. The VMs' IPs and hostnames are:
192.168.63.171    node1.zhch
192.168.63.172    node2.zhch
192.168.63.173    node3.zhch
192.168.63.174    node4.zhch
192.168.63.175    node5.zhch

Passwordless SSH, firewall setup, and the JDK install are not covered again here. The virtual machine roles are assigned as follows:
 node1: active namenode1, active resource manager, zookeeper, journalnode
 node2: standby namenode1, zookeeper, journalnode
 node3: active namenode2, standby resource manager, zookeeper, journalnode, datanode
 node4: standby namenode2, datanode
 node5: datanode

The steps are largely the same as for a plain NameNode HA installation: the ZooKeeper cluster must be installed first, and the main differences lie in the core-site.xml, hdfs-site.xml, and yarn-site.xml files; the remaining files are configured just as in a NameNode HA setup.
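For reference, a minimal zoo.cfg for the three-node ensemble on node1-node3 might look like the sketch below. Only clientPort 2181 and the server list are pinned down by the Hadoop configuration that follows; the timing values and dataDir are illustrative assumptions.

## zoo.cfg (illustrative sketch)
tickTime=2000
initLimit=10
syncLimit=5
## dataDir is an assumed location; each node also needs a matching myid file (1, 2, or 3) in it
dataDir=/home/yyl/program/zookeeper/data
clientPort=2181
server.1=node1.zhch:2888:3888
server.2=node2.zhch:2888:3888
server.3=node3.zhch:2888:3888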

1. Configuring Hadoop

## Unpack
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz
## Create working directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/journal
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp
## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch
## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml


<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node2.zhch:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node2.zhch:19888</value>
  </property>
</configuration>


## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml


<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1.zhch:2181,node2.zhch:2181,node3.zhch:2181</value>
  </property>
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>1000</value>
  </property>
</configuration>
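Note that fs.defaultFS makes mycluster the default namespace. With federation, the second namespace can still be addressed explicitly by its nameservice ID once the cluster is up, for example:

[yyl@node1 ~]$ hdfs dfs -ls hdfs://yourcluster/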



## Configure hdfs-site.xml
[yyl@node1 hadoop]$ vim hdfs-site.xml


<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster,yourcluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
    <value>node1.zhch:53310</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
    <value>node2.zhch:53310</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1.zhch:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2.zhch:50070</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.yourcluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.yourcluster.nn1</name>
    <value>node3.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.yourcluster.nn2</name>
    <value>node4.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.yourcluster.nn1</name>
    <value>node3.zhch:53310</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.yourcluster.nn2</name>
    <value>node4.zhch:53310</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.yourcluster.nn1</name>
    <value>node3.zhch:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.yourcluster.nn2</name>
    <value>node4.zhch:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node1.zhch:8485;node2.zhch:8485;node3.zhch:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.yourcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/yyl/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/yyl/program/hadoop-2.5.2/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.mycluster</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.yourcluster</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
    <value>60000</value>
  </property>
  <property>
    <name>ipc.client.connect.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>dfs.image.transfer.bandwidthPerSec</name>
    <value>4194304</value>
  </property>
</configuration>



## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml


<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
    <value>5000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1.zhch:2181,node2.zhch:2181,node3.zhch:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk.state-store.address</name>
    <value>node1.zhch:2181,node2.zhch:2181,node3.zhch:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>node1.zhch:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>node3.zhch:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>node1.zhch:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>node3.zhch:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>node1.zhch:23141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>node3.zhch:23141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>node1.zhch:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>node3.zhch:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>node1.zhch:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>node3.zhch:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>node1.zhch:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>node3.zhch:23189</value>
  </property>
</configuration>



## Distribute to all the nodes
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/
## On active namenode2 (node3.zhch) and standby namenode2 (node4.zhch), change the value of dfs.namenode.shared.edits.dir in hdfs-site.xml to qjournal://node1.zhch:8485;node2.zhch:8485;node3.zhch:8485/yourcluster, leaving all other properties unchanged.
## On the standby resource manager (node3.zhch), change the value of yarn.resourcemanager.ha.id in yarn-site.xml to rm2, leaving all other properties unchanged.
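These two per-node edits could also be scripted, for instance with sed over ssh. This is only a sketch: it assumes the file layout shown above, where each value sits on a single line and <value>rm1</value> occurs only as the yarn.resourcemanager.ha.id value (the rm-ids line is <value>rm1,rm2</value>, which the pattern does not match).

[yyl@node1 ~]$ ssh node3.zhch "sed -i 's|8485/mycluster|8485/yourcluster|' ~/program/hadoop-2.5.2/etc/hadoop/hdfs-site.xml"
[yyl@node1 ~]$ ssh node4.zhch "sed -i 's|8485/mycluster|8485/yourcluster|' ~/program/hadoop-2.5.2/etc/hadoop/hdfs-site.xml"
[yyl@node1 ~]$ ssh node3.zhch "sed -i 's|<value>rm1</value>|<value>rm2</value>|' ~/program/hadoop-2.5.2/etc/hadoop/yarn-site.xml"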

## Set the Hadoop environment variables on each node
[yyl@node1 ~]$ vim .bash_profile 
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
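Reload the profile on each node so the hadoop commands used below resolve, and sanity-check:

[yyl@node1 ~]$ source ~/.bash_profile
[yyl@node1 ~]$ hadoop version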

2. Formatting and Starting

## Start the ZooKeeper cluster
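## (assuming a ZooKeeper release is installed on node1-node3 with zkServer.sh on the PATH, the ensemble is brought up with:)
[yyl@node1 ~]$ zkServer.sh start
[yyl@node2 ~]$ zkServer.sh start
[yyl@node3 ~]$ zkServer.sh start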
## On active namenode1 (node1.zhch) and active namenode2 (node3.zhch), run: $HADOOP_PREFIX/bin/hdfs zkfc -formatZK
[yyl@node1 ~]$ hdfs zkfc -formatZK
[yyl@node3 ~]$ hdfs zkfc -formatZK
[yyl@node2 ~]$ zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[mycluster, yourcluster]
## Start a journalnode on node1.zhch, node2.zhch, and node3.zhch:
[yyl@node1 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node1.zhch.out
[yyl@node1 ~]$ jps
1985 QuorumPeerMain
2222 Jps
2176 JournalNode
[yyl@node2 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node2.zhch.out
[yyl@node2 ~]$ jps
1783 Jps
1737 JournalNode
1638 QuorumPeerMain
[yyl@node3 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node3.zhch.out
[yyl@node3 ~]$ jps
1658 JournalNode
1495 QuorumPeerMain
1704 Jps

## Format the namenode on active namenode1 (node1.zhch)
## (both nameservices are formatted with the same clusterId, c1, so that they federate into one cluster)
[yyl@node1 ~]$ hdfs namenode -format -clusterId c1
## Start the namenode process on active namenode1 (node1.zhch)
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
2286 NameNode
1985 QuorumPeerMain
2369 Jps
2176 JournalNode
## Sync the metadata to standby namenode1 (node2.zhch)
[yyl@node2 ~]$ hdfs namenode -bootstrapStandby
## Start the namenode process on standby namenode1 (node2.zhch)
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1923 Jps
1737 JournalNode
1638 QuorumPeerMain
1840 NameNode

## Format the namenode on active namenode2 (node3.zhch), using the same clusterId
[yyl@node3 ~]$ hdfs namenode -format -clusterId c1
## Start the namenode process on active namenode2 (node3.zhch)
[yyl@node3 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node3.zhch.out
[yyl@node3 ~]$ jps
1658 JournalNode
1495 QuorumPeerMain
1767 NameNode
1850 Jps
## Sync the metadata to standby namenode2 (node4.zhch)
[yyl@node4 ~]$ hdfs namenode -bootstrapStandby
## Start the namenode process on standby namenode2 (node4.zhch)
[yyl@node4 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node4.zhch.out
[yyl@node4 ~]$ jps
1602 Jps
1519 NameNode

## Start a ZooKeeperFailoverController on every namenode
[yyl@node1 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node1.zhch.out
[yyl@node2 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node2.zhch.out
[yyl@node3 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node3.zhch.out
[yyl@node4 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node4.zhch.out

## Start the DataNodes
[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
## Start YARN
[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
## Start a resource manager on the standby (node3.zhch)
[yyl@node3 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node3.zhch.out
## Check the resource manager states
[yyl@node1 ~]$ yarn rmadmin -getServiceState rm1
active
[yyl@node1 ~]$ yarn rmadmin -getServiceState rm2
standby
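The NameNode side can be checked in the same way; in a federated setup, hdfs haadmin takes a -ns flag to select the nameservice (a sketch, using the nameservice and namenode IDs defined above):

[yyl@node1 ~]$ hdfs haadmin -ns mycluster -getServiceState nn1
[yyl@node1 ~]$ hdfs haadmin -ns yourcluster -getServiceState nn1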

3. Verification

Open two terminals, both connected to the host of the active resource manager. In terminal A, run jps to note the resource manager's process ID; in terminal B, launch a MapReduce job; then go back to terminal A and kill the resource manager process. Finally, observe whether the MapReduce job still runs to completion after the active resource manager has died.
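The test could be driven roughly like this (a sketch: the wordcount example jar ships with the release, but the input/output paths and the pid are illustrative):

## Terminal B: submit a job
[yyl@node1 ~]$ hadoop jar ~/program/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar wordcount /in /out
## Terminal A: find the active resource manager's pid, kill it, then watch the job in terminal B
[yyl@node1 ~]$ jps | grep ResourceManager
[yyl@node1 ~]$ kill -9 <pid printed by jps>
## The standby should take over:
[yyl@node1 ~]$ yarn rmadmin -getServiceState rm2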

That wraps up this experiment with Hadoop 2 NameNode HA + federation + ResourceManager HA. Thanks for reading.

