Mastering Spark Cluster Setup and Testing


Recommended PC configuration: at least an i5 CPU and 8 GB of RAM.
1. Install VMware Workstation. The latest official build is recommended; download address: https://my.vmware.com/cn/web/vmware/details?downloadGroup=WKST-1210-WIN&productId=524&rPId=9763
2. Run VMware Workstation, create three virtual machines, and install Ubuntu on each. Download Ubuntu from the official site; I used ubuntu-14.04.5-desktop-amd64.iso.

The virtual machines need network access. Here we use Network Address Translation (NAT), so that they share the host's IP connection.
Note 1: You can set up one machine first and then produce the other two with VMware's clone feature.
Note 2: After installing the OS, install VMware Tools so that you can copy files between host and guest and run the VM full-screen:
a. tar -xzvf VMwareTools-9.6.0-1294478.tar.gz
b. cd vmware-tools-distrib/
c. sudo ./vmware-install.pl
d. Press Enter at each prompt.
e. The steps may differ between versions; if needed, search for "Ubuntu install VMware Tools".
3. To simplify the permission issues in the steps that follow, we configure the system to allow root login, as follows:

a. In a terminal, enter root mode: sudo -s
b. gedit /etc/lightdm/lightdm.conf
c. Append at the end of the file:
greeter-show-manual-login=true
allow-guest=false
d. Set a password for the root account: sudo passwd root
e. After restarting the system (reboot), you can log in as root.
Note 1: If the system reports that vim is not installed, install it with apt-get install vim.
Note 2: After switching to root login, you may see the following error:
Error found when loading /root/.profile: stdin: is not a tty
As a result the session will not be configured correctly. You should fix the problem as soon as feasible.
Fix 1: In /root/.profile, replace mesg n with tty -s && mesg n, then reboot.
Fix 2: Copy the .profile from a non-root user's home directory to /root/, e.g. cp /home/<non-root username>/.profile /root/, then reboot.

On each node, change the node's hostname, and configure the mapping between IP addresses and hostnames (see the /etc/hosts sketch below).

Modify core-site.xml; the following is a minimal configuration:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>
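For the hostname mapping step above, a minimal sketch of /etc/hosts follows. 192.168.85.130 is the master's address taken from the logs later in this document; the worker addresses are hypothetical examples for a NAT subnet, so adjust them to your own network. Each node's own name additionally goes into its /etc/hostname.

# /etc/hosts, identical on the master and on every worker
# (the worker IPs below are hypothetical examples)
192.168.85.130  master
192.168.85.131  worker1
192.168.85.132  worker2
192.168.85.133  worker3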

h. Modify hdfs-site.xml; the following is a minimal configuration. For more detail see the official reference: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
gedit hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/dfs/data</value>
</property>
Note: if the directories given for dfs.namenode.name.dir and dfs.datanode.data.dir do not exist, start-dfs.sh will fail later with an error.
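To avoid that error, the directories can be created up front; a minimal sketch, assuming the hadoop-2.6.4 paths configured above:

# On the master (namenode):
mkdir -p /usr/local/hadoop/hadoop-2.6.4/dfs/name
# On every worker (datanode):
mkdir -p /usr/local/hadoop/hadoop-2.6.4/dfs/data
# On all nodes, the hadoop.tmp.dir from core-site.xml:
mkdir -p /usr/local/hadoop/hadoop-2.6.4/tmp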

i. Modify mapred-site.xml; the following is a minimal configuration. For more detail see the official reference: http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
Note: MRv1 Hadoop does not use yarn as its resource manager; its configuration is:
gedit mapred-site.xml (without yarn):
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
MRv2 Hadoop uses yarn as its resource manager; its configuration is:
vim mapred-site.xml (with yarn):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
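If mapred-site.xml does not exist yet: in the stock Hadoop 2.6 layout, etc/hadoop ships only a mapred-site.xml.template, so the file is usually created by copying the template first. A minimal sketch, assuming $HADOOP_HOME points at the install directory:

cd $HADOOP_HOME/etc/hadoop
cp mapred-site.xml.template mapred-site.xml   # then edit as above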

j. Modify yarn-site.xml; the following is a minimal configuration. For more detail see the official reference: http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
gedit yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

Note: Yarn is the resource manager Hadoop provides for the whole distributed (big-data) cluster; it manages and allocates the cluster's resources. On top of Yarn we can run several compute frameworks on the same big-data cluster at once, e.g. Spark, MapReduce, and Storm.
12. Start and verify the Hadoop cluster:
a. Format the hdfs filesystem: hadoop namenode -format (or hdfs namenode -format)
root@master:/usr/local/hadoop/hadoop-2.6.0/bin# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.

16/03/03 14:38:15 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.85.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
This command starts the namenode, formats it, and then shuts it down. After formatting, the following files are generated on the namenode:

root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name# ls
current
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name# cd current/
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# ls
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
The contents of the VERSION file are:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# more VERSION
#Thu Mar 03 16:54:31 CST 2016
namespaceID=1103891
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
storageType=NAME_NODE
blockpoolID=BP-996551254-192.168.85.130-1456995271763
layoutVersion=-60
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current#
The command does not create any files under the directory that dfs.datanode.data.dir points to on the datanodes:
root@worker1:/usr/local/hadoop/hadoop-2.6.0/dfs/data# ls
root@worker1:/usr/local/hadoop/hadoop-2.6.0/dfs/data#

For details of this command see the official documentation: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode
b. Start hdfs: start-dfs.sh
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs# start-dfs.sh
16/03/03 16:57:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
worker1: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
worker2: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
worker3: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
16/03/03 16:57:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs#

Verify with jps that HDFS started successfully:
root@master:/usr/local/hadoop/hadoop-2.6.0/bin# jps
3600 NameNode
3926 Jps
3815 SecondaryNameNode
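Besides jps, the datanodes can also be checked from the command line; a minimal sketch using the stock dfsadmin tool, run on the master:

# Prints cluster capacity plus one section per datanode; with the three
# workers above it should report "Live datanodes (3)".
hdfs dfsadmin -report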

Check via the web UI that HDFS started successfully: http://master:50070 (dfshealth.html#tab-overview). The Overview page reports 'master:9000' (active) and the following:
Started: Thu Mar 03 16:57:44 CST 2016
Version: 2.6.0, re3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled: 2014-11-13T21:10Z by jenkins from (detached from e349649)
Cluster ID: CID-69035837-029a-45a3-b0b3-1d662751eb43
Block Pool ID: BP-996551254-192.168.85.130-1456995271763
DFS Used: 72 KB
Non DFS Used: 18.82 GB
DFS Remaining: 33.96 GB
DFS Used%: 0%
DFS Remaining%: 64.35%
Block Pool Used: 72 KB
Block Pool Used%: 0%
DataNodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes: 3 (Decommissioned: 0)
Dead Nodes: 0 (Decommissioned: 0)
Decommissioning Nodes, Number of Under-Replicated Blocks, and Number of Blocks Pending Deletion are also shown.
Note 1: After the first start of hdfs, a current directory is in fact created under the directory that dfs.datanode.data.dir points to on each datanode. The BP directory generated inside it matches the blockpoolID field of the VERSION file in the current subdirectory under the namenode's dfs.namenode.name.dir; a VERSION file is also generated there, and its clusterID

matches the clusterID in the VERSION file under the namenode's dfs.namenode.name.dir/current:
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data# ls
current  in_use.lock
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data# cd current/
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# ls
BP-996551254-192.168.85.130-1456995271763  VERSION
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# more VERSION
#Thu Mar 03 16:57:50 CST 2016
storageID=DS-773e81f4-39f9-4a20-9f36-b48952d06848
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
datanodeUuid=db5rt1b7-6592-46ff-af4e-c99a0ee75b80
storageType=DATA_NODE
layoutVersion=-56
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current#
If hdfs namenode -format is executed again later, the namenode's VERSION file changes:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# more VERSION
namespaceID=2001999531
clusterID=CID-d216d552-e79e-4d9c-8c6d-f9b412205090
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1484997606-192.168.85.130-1457136293776
layoutVersion=-60
But the datanodes' BP directories and VERSION files do not change:
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# ls
BP-996551254-192.168.85.130-1456995271763  VERSION
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# more VERSION
#Fri Mar 04 19:03:10 EST 2016
storageID=DS-a9f0dfd3-cdc0-4810-ab49-49579b1ee3b2
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
datanodeUuid=f005a5B*e346fe-94fa-8061c8ac0fb0
storageType=DATA_NODE
layoutVersion=-56
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current#

When start-dfs.sh is run again after that, the namenode starts successfully, but the datanodes cannot start and register with the namenode, because their VERSION files no longer match the namenode's! Therefore: before every run of hdfs namenode -format, the datanodes' data folders must be emptied. (The namenode's name folder does not need to be emptied, and neither do the tmp folders on the namenode or the datanodes.)
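A minimal sketch of that cleanup, assuming passwordless ssh from the master to the workers (as required by start-dfs.sh anyway) and the directory layout used in this document:

# Run on the master BEFORE re-formatting: empty every datanode's data dir
# so the freshly generated clusterID can be adopted on the next start.
for w in worker1 worker2 worker3; do
  ssh root@$w 'rm -rf /usr/local/hadoop/hadoop-2.6.0/dfs/data/*'
done
hdfs namenode -format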

Note 2: Some people like to use start-all.sh. In essence it just runs start-dfs.sh and start-yarn.sh, as both its own deprecation notice and its source show; using start-dfs.sh and start-yarn.sh separately is recommended instead of calling start-all.sh directly:
# Start all hadoop daemons.  Run this on master node.
echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi
# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi

c. Start yarn: start-yarn.sh
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
worker3: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker2: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker1: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current#
Verify with jps that yarn started successfully:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# jps
9480 ResourceManager
8908 NameNode
9116 SecondaryNameNode
9743 Jps
root@worker1:~# jps
(each worker should additionally show a NodeManager process)
Check via the web UI that yarn started successfully: http://master:8088/ and http://worker1:8042/
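The registered NodeManagers can also be listed from the command line; a minimal sketch with the stock yarn CLI, run on the master:

# Lists NodeManagers known to the ResourceManager; with the cluster above
# it should show worker1, worker2 and worker3 in RUNNING state.
yarn node -list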

d. Start the JobHistory server (the stock Hadoop 2.x command for this is mr-jobhistory-daemon.sh start historyserver); jps then additionally shows a JobHistoryServer process:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# jps
9878 JobHistoryServer
9480 ResourceManager
8908 NameNode
9116 SecondaryNameNode
9948 Jps
Check via the web UI that the JobHistory server started successfully: http://master:19888/jobhistory
At this point the JobHistory page's Retired Jobs table is empty (No data available in table).
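For a scripted health check, the JobHistory server also exposes a small REST API; a sketch, assuming the default web port 19888 used above:

# Should return a short JSON document with the server's start time and
# Hadoop version if the daemon is up.
curl http://master:19888/ws/v1/history/info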

e. Verify the Hadoop cluster.
Create the directories:
hdfs dfs -mkdir -p /data/wordcount
hdfs dfs -mkdir -p /output
Upload files:
hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/etc/hadoop/*.xml /data/wordcount
Check that the upload succeeded:
hdfs dfs -ls /data/wordcount
root@master:~# hdfs dfs -ls /data/wordcount
16/03/05 08:36:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 10 items
-rw-r--r--   2 root supergroup   4436 2016-03-05 08:35 /data/wordcount/capacity-scheduler.xml
-rw-r--r--   2 root supergroup   1293 2016-03-05 08:35 /data/wordcount/core-site.xml
-rw-r--r--   2 root supergroup   9683 2016-03-05 08:35 /data/wordcount/hadoop-policy.xml
-rw-r--r--   2 root supergroup   1289 2016-03-05 08:35 /data/wordcount/hdfs-site.xml
(the listing continues with httpfs-site.xml, kms-acls.xml, kms-site.xml, mapred-site.xml and the remaining uploaded *.xml files, all dated 2016-03-05 08:35)

Try running the wordcount program that ships with hadoop:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /data/wordcount /output/wordcount
As the output below shows, it runs successfully:
root@master:~# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /data/wordcount /output/wordcount
16/03/05 08:43:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/05 08:43:16 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.85.130:8032
16/03/05 08:43:19 INFO input.FileInputFormat: Total input paths to process : 10
16/03/05 08:43:19 INFO mapreduce.JobSubmitter: number of splits:10
16/03/05 08:43:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1457138395806_0001
16/03/05 08:43:24 INFO impl.YarnClientImpl: Submitted application application_1457138395806_0001
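Once the job finishes, the results can be read straight from HDFS; a minimal sketch, assuming the /output/wordcount path used above:

# _SUCCESS marks a completed job; part-r-* files hold the reducer output.
hdfs dfs -ls /output/wordcount
# Show the first 20 counted words.
hdfs dfs -cat /output/wordcount/part-r-00000 | head -n 20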
