Preface
Before you start, prepare the following:
- Virtual machine image: CentOS-6.5-x86_64-bin-DVD1.iso
  Link: https://pan.baidu.com/s/1O9a-6Sn7riGWG3mVQssTGg  Extraction code: rud1
- JDK: jdk-8u144-linux-x64.tar.gz
  Link: https://pan.baidu.com/s/1TdaCDaT_qriDMjbYFyphPw  Extraction code: qulj
- Hadoop: hadoop-2.7.2.tar.gz
  Link: https://pan.baidu.com/s/1Wt0mAUHKJDSYTUM5-u6CYw  Extraction code: oofe
  Or from the official archive: https://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/
If the Baidu Pan downloads above are slow, you can get the same files from the major open-source mirrors or the official sites.
I use Xshell as my SSH client; it is very convenient, and you can download it from its official website if you are interested.
I. Preliminary Environment Setup
Turn off the firewall
```
# Check the firewall status
[root@localhost dr]# firewall-cmd --state
running
# Stop the firewall
[root@localhost dr]# systemctl stop firewalld.service
[root@localhost dr]# firewall-cmd --state
not running
# Keep firewalld from starting at boot
[root@localhost dr]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
```
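The firewalld/systemctl commands above belong to CentOS 7. If you are actually on the CentOS 6.5 image listed in the preface, firewalld does not exist there; a rough equivalent using the iptables service would be (an assumption based on a default CentOS 6 install, not taken from the original write-up):

```
# CentOS 6.x: the firewall is managed by the iptables service instead of firewalld
service iptables status    # check firewall status
service iptables stop      # stop the firewall
chkconfig iptables off     # keep it from starting at boot
```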
Edit the hosts file and check that the hostname can be pinged
```
# Check the local IP address: 192.168.23.128
[root@localhost dr]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.23.128  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::5895:a1c7:da57:e4ad  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a1:55:a1  txqueuelen 1000  (Ethernet)
        RX packets 213928  bytes 299951288 (286.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22291  bytes 2345515 (2.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
# Edit the hosts file
[root@localhost dr]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.128 master
# Test that the mapping works
[root@localhost dr]# ping master
PING master (192.168.23.128) 56(84) bytes of data.
64 bytes from master (192.168.23.128): icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from master (192.168.23.128): icmp_seq=2 ttl=64 time=0.113 ms
64 bytes from master (192.168.23.128): icmp_seq=3 ttl=64 time=0.023 ms
64 bytes from master (192.168.23.128): icmp_seq=4 ttl=64 time=0.122 ms
```
Set up passwordless SSH login
```
[root@localhost dr]# ssh-keygen          # press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:8CSgRg7wOr5NWlwL1A17rW3CyA9X7RkpFbvP2MAHL4A root@master
The key's randomart image is:
+---[RSA 2048]----+
|+ . .. o.        |
| = ...+ o o o    |
| =. ooE.= *      |
| o. . +=+ = *    |
|o . = =So B o    |
|... o = o O      |
| . + . . . +     |
|        *        |
|       o .       |
+----[SHA256]-----+
# Copy the key to master to enable passwordless login
[root@localhost .ssh]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
        (if you think this is a mistake, you may want to use -f option)
```
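To double-check that passwordless login actually works, you can log in to master once (a quick sanity check, not part of the original write-up):

```
# This should log you in without asking for a password
ssh master
# ...and return to the previous shell
exit
```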
II. Installing Hadoop
1. Extract Hadoop
- First put the Hadoop archive in the local directory /home/dr/Datafile/ (this is my directory; the same path is used below)
- Create a hadoop directory under /usr/local/
```
[root@localhost /]# mkdir /usr/local/hadoop
[root@localhost /]# cd /usr/local/
[root@localhost local]# ll
total 0
drwxr-xr-x. 2 root root  6 Nov  5  2016 bin
drwxr-xr-x. 2 root root  6 Nov  5  2016 etc
drwxr-xr-x. 2 root root  6 Nov  5  2016 games
drwxr-xr-x. 2 root root  6 Mar 29 01:32 hadoop
drwxr-xr-x. 2 root root  6 Nov  5  2016 include
drwxr-xr-x. 3 root root 26 Mar 28 04:43 java
drwxr-xr-x. 2 root root  6 Nov  5  2016 lib
drwxr-xr-x. 2 root root  6 Nov  5  2016 lib64
drwxr-xr-x. 2 root root  6 Nov  5  2016 libexec
drwxr-xr-x. 2 root root  6 Nov  5  2016 sbin
drwxr-xr-x. 5 root root 49 Mar 28 01:26 share
drwxr-xr-x. 2 root root  6 Nov  5  2016 src
```
- Extract the Hadoop archive into the /usr/local/hadoop directory
```
[root@localhost local]# tar -zxvf /home/dr/Datafile/hadoop-2.7.7.tar.gz -C /usr/local/hadoop/
hadoop-2.7.7/
hadoop-2.7.7/bin/
hadoop-2.7.7/bin/hadoop.cmd
hadoop-2.7.7/bin/rcc
hadoop-2.7.7/bin/test-container-executor
hadoop-2.7.7/bin/mapred
hadoop-2.7.7/bin/yarn
hadoop-2.7.7/bin/yarn.cmd
hadoop-2.7.7/bin/hadoop
```
The -C option is what makes tar extract into the specified directory.
2. Add Hadoop to the environment variables
- Get the Hadoop installation path
```
[root@localhost hadoop-2.7.7]# pwd
/usr/local/hadoop/hadoop-2.7.7
```
- Edit the /etc/profile file and append the following at the end
```
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
- Reload the modified file and check that Hadoop is installed correctly
```
[root@localhost hadoop-2.7.7]# source /etc/profile
[root@localhost hadoop-2.7.7]# hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /usr/local/hadoop/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar
```
3. Hadoop directory structure
```
[root@localhost hadoop-2.7.7]# ll
total 112
drwxr-xr-x. 2 dr ftp   194 Jul 18  2018 bin      # scripts for operating the Hadoop services (HDFS, YARN)
drwxr-xr-x. 3 dr ftp    20 Jul 18  2018 etc      # Hadoop configuration file directory
drwxr-xr-x. 2 dr ftp   106 Jul 18  2018 include
drwxr-xr-x. 3 dr ftp    20 Jul 18  2018 lib      # Hadoop native libraries (data compression/decompression)
drwxr-xr-x. 2 dr ftp   239 Jul 18  2018 libexec
-rw-r--r--. 1 dr ftp 86424 Jul 18  2018 LICENSE.txt
-rw-r--r--. 1 dr ftp 14978 Jul 18  2018 NOTICE.txt
-rw-r--r--. 1 dr ftp  1366 Jul 18  2018 README.txt
drwxr-xr-x. 2 dr ftp  4096 Jul 18  2018 sbin
drwxr-xr-x. 4 dr ftp    31 Jul 18  2018 share    # Hadoop dependency jars, documentation, and official examples
```
III. Hadoop Pseudo-Distributed Configuration (the important part)
1. Notes about the configuration files
Unless stated otherwise, every configuration file edited below lives in the etc/hadoop directory inside the Hadoop installation, i.e. /usr/local/hadoop/hadoop-2.7.7/etc/hadoop (this directory already exists in the extracted archive).
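For convenience you can change into that directory before editing the files (a small extra step using the install path from this guide):

```
cd /usr/local/hadoop/hadoop-2.7.7/etc/hadoop
```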
2. Configure hadoop-env.sh (under hadoop-2.7.7/etc/hadoop/)
[root@localhost hadoop]# vi hadoop-env.sh
To make it easier to find the lines we need to change, type :se nu in vi's command mode to display line numbers.
Then go to line 25 and line 33 and point them at your JDK directory and your Hadoop configuration directory (note that the Hadoop path here must end with /etc/hadoop).
```
# Before
 24 # The java implementation to use.
 25 export JAVA_HOME=${JAVA_HOME}
 26
 27 # The jsvc implementation to use. Jsvc is required to run secure datanodes
 28 # that bind to privileged ports to provide authentication of data transfer
 29 # protocol. Jsvc is not required if SASL is configured for authentication of
 30 # data transfer protocol using non-privileged ports.
 31 #export JSVC_HOME=${JSVC_HOME}
 32
 33 export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# After
 24 # The java implementation to use.
 25 export JAVA_HOME=/usr/local/java/jdk1.8.0_171
 26
 27 # The jsvc implementation to use. Jsvc is required to run secure datanodes
 28 # that bind to privileged ports to provide authentication of data transfer
 29 # protocol. Jsvc is not required if SASL is configured for authentication of
 30 # data transfer protocol using non-privileged ports.
 31 #export JSVC_HOME=${JSVC_HOME}
 32
 33 export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-2.7.7/etc/hadoop
```
Save and exit (press ESC, then type :wq!) so the configuration takes effect.
3. Configure the four files core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml
First, create a new tmp folder under the Hadoop directory /usr/local/hadoop/hadoop-2.7.7, as shown below.
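One way to create it, using the install path from earlier in this guide:

```
mkdir /usr/local/hadoop/hadoop-2.7.7/tmp
```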
Edit core-site.xml
```
[root@localhost hadoop]# vi core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <!-- master is your own hostname -->
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- change this to your own tmp directory -->
        <value>/usr/local/hadoop/hadoop-2.7.7/tmp</value>
    </property>
</configuration>
```
Edit hdfs-site.xml
```
vim hdfs-site.xml    # copy the content below as-is

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- HDFS permission checking; false means any user can operate on files in HDFS -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
```
Edit mapred-site.xml
This file does not exist initially, but there is a template for it, mapred-site.xml.template.
So make a copy and rename it to mapred-site.xml with this command:
cp ./mapred-site.xml.template ./mapred-site.xml
Then open mapred-site.xml for editing:
```
vim mapred-site.xml    # copy the content below as-is

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
Edit yarn-site.xml
```
vim yarn-site.xml    # copy the content below, changing master to your own hostname

<configuration>
    <property>
        <!-- the hostname of the YARN ResourceManager -->
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <!-- how the NodeManager obtains data -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```
Edit the slaves file
vim slaves # change localhost to master
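After the edit, the slaves file should contain only the hostname of the worker node, which in this pseudo-distributed setup is the same machine (assuming the hostname master used throughout this guide):

```
master
```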
IV. Checking and Starting
Use the jps command to see which daemons are currently running
```
[root@localhost hadoop]# jps
50451 Jps
# Hadoop has not been started yet, so no Hadoop daemons are running
```
Format the NameNode
The NameNode only needs to be formatted the first time Hadoop is installed; do not casually rerun this command later, because re-formatting generates a new cluster ID and leaves any existing DataNode data incompatible.
```
[root@localhost hadoop]# hadoop namenode -format
# If formatting succeeds, the output contains "successfully formatted"
21/03/29 02:43:46 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.7.7/tmp/dfs/name has been successfully formatted.
21/03/29 02:43:46 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/hadoop-2.7.7/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/03/29 02:43:46 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/hadoop-2.7.7/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
21/03/29 02:43:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/03/29 02:43:46 INFO util.ExitUtil: Exiting with status 0
21/03/29 02:43:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.23.128
************************************************************/
```
Start Hadoop
Because Hadoop has already been added to the environment variables, there is no need to start it from the sbin directory; the command below works from any directory.
Simply run:
```
start-dfs.sh

[root@localhost hadoop]# jps
50788 DataNode
51093 Jps
50649 NameNode
50970 SecondaryNameNode
# If these daemons appear, the pseudo-distributed Hadoop setup is working
```
Open a browser and go to http://master:50070; you should see the NameNode status page.
V. Example Test
Now that Hadoop is installed, let's run the official wordcount example to get a feel for what Hadoop can do.
1. Create a local file test.txt
I put it at /home/dr/test.txt.
Run:
```
vim test.txt
# enter the following content, then save and exit
i like hadoop and i like study
i like java
i like jdk
i like java jdk hadoop
```
2. Upload the file to HDFS
My test.txt is under /home/dr/.
First create an input directory in the HDFS root with:
hdfs dfs -mkdir /input
Then upload the file to HDFS (make sure your current directory is /home/dr):
hdfs dfs -put ./test.txt /input
Then check that the upload succeeded:
hdfs dfs -ls /input
3. Run the Hadoop job
In the startup step above we only started HDFS, not YARN, so start YARN first:
start-yarn.sh
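If YARN started correctly, jps should now also list ResourceManager and NodeManager alongside the HDFS daemons (the PIDs below are only illustrative, not taken from the original session):

```
[root@localhost ~]# jps
51210 ResourceManager
51320 NodeManager
50788 DataNode
50649 NameNode
50970 SecondaryNameNode
51580 Jps
```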
Then run:
```
# The example jar under share/hadoop/mapreduce is named after your Hadoop version, so adjust the file name if your version differs
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /input /output
```
You can see Hadoop running the job; the "successfully" message at the end means it completed.
4. View the output
After the job succeeds, Hadoop creates two files under the specified /output path.
Let's take a look:
hdfs dfs -ls /output
The first file, /output/_SUCCESS, only marks that the job finished successfully; it has no content and can be ignored.
The second file, /output/part-r-00000, is the actual output.
View the result:
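A sketch of how to print the output file, together with the counts you should roughly expect given the test.txt content above (tab-separated, sorted by word):

```
hdfs dfs -cat /output/part-r-00000
# expected output for the sample input:
# and      1
# hadoop   2
# i        5
# java     2
# jdk      2
# like     5
# study    1
```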
The result shows how many times each word appears.