Kafka Deployment and Commands

Download links:
Zookeeper:
http://mirror.bit.edu.cn/apache/zookeeper/current/

Scala:
http://www.scala-lang.org/download/2.11.8.html

Kafka:
http://kafka.apache.org/downloads


I. Zookeeper Deployment
1. Download and extract zookeeper-3.4.6.tar.gz
[root@hadoop001 software]# tar -xvf zookeeper-3.4.6.tar.gz
[root@hadoop001 software]# mv zookeeper-3.4.6 zookeeper
[root@hadoop001 software]#
[root@hadoop001 software]# chown -R root:root zookeeper
2. Edit the configuration
[root@hadoop001 software]# cd zookeeper/conf
[root@hadoop001 conf]# ll
total 12
-rw-rw-r--. 1 root root  535 Feb 20  2014 configuration.xsl
-rw-rw-r--. 1 root root 2161 Feb 20  2014 log4j.properties
-rw-rw-r--. 1 root root  922 Feb 20  2014 zoo_sample.cfg
[root@hadoop001 conf]# cp zoo_sample.cfg zoo.cfg
[root@hadoop001 conf]# vi zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.

dataDir=/opt/software/zookeeper/data

# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

[root@hadoop001 conf]# cd ../
[root@hadoop001 zookeeper]#  mkdir data
[root@hadoop001 zookeeper]# touch data/myid
[root@hadoop001 zookeeper]# echo 1 > data/myid
[root@hadoop001 zookeeper]#

3. Copy the configuration to hadoop002/003; only the myid value below differs
[root@hadoop001 software]# scp -r  zookeeper 192.168.137.141:/opt/software/
[root@hadoop001 software]# scp -r  zookeeper 192.168.137.142:/opt/software/

[root@hadoop002 zookeeper]# echo 2 > data/myid
[root@hadoop003 zookeeper]# echo 3 > data/myid

### Note: never run `echo 3>data/myid` — keep the spaces around `>`. Without the space, the shell parses `3>` as a redirect of file descriptor 3, so echo receives no argument and nothing is written to the myid file.
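The difference is easy to demonstrate in a scratch directory: `3 >` writes the digit, while `3>` is parsed as a file-descriptor redirect and leaves the file empty.

```shell
# Demonstration of why the space before '>' matters.
tmp=$(mktemp -d)

echo 3 > "$tmp/good"   # "echo 3" with stdout redirected: good contains 3
echo 3> "$tmp/bad"     # "3>" redirects fd 3; echo has no argument, bad stays empty

cat "$tmp/good"
wc -c < "$tmp/bad"
```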

4. Start the Zookeeper cluster
[root@hadoop001 bin]# ./zkServer.sh start
[root@hadoop002 bin]# ./zkServer.sh start
[root@hadoop003 bin]# ./zkServer.sh start

5. Check the Zookeeper status
[root@hadoop001 bin]# ./zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop002 bin]#  ./zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@hadoop002 bin]# 
[root@hadoop003 bin]#  ./zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop003 bin]# 


6. Enter the client shell
[root@hadoop001 bin]# ./zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, yarn-leader-election, hadoop-ha, rmstore]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] help
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history 
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit 
getAcl path
close 
connect host:port
[zk: localhost:2181(CONNECTED) 2] 

II. Kafka Deployment
1. Extract and configure Scala
[root@hadoop001 software]# tar -xzvf scala-2.11.8.tgz
[root@hadoop001 software]# chown -R root:root scala-2.11.8
[root@hadoop001 software]# ln -s scala-2.11.8 scala


# Environment variables
[root@hadoop001 software]# vi /etc/profile
export SCALA_HOME=/opt/software/scala
export PATH=$SCALA_HOME/bin:$PATH

[root@hadoop001 software]# source /etc/profile
[root@hadoop001 software]# scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45).
Type in expressions for evaluation. Or try :help.

 
2. Download Kafka 0.10.0.1 built for Scala 2.11 and extract it
[root@hadoop001 software]# tar -xzvf kafka_2.11-0.10.0.1.tgz
[root@hadoop001 software]# ln -s kafka_2.11-0.10.0.1 kafka
[root@hadoop001 software]# 

3. Create the logs directory and edit server.properties
[root@hadoop001 software]# cd kafka
[root@hadoop001 kafka]# mkdir logs
[root@hadoop001 kafka]# cd config/
[root@hadoop001 config]# vi server.properties
broker.id=1
port=9092
host.name=192.168.137.141
log.dirs=/opt/software/kafka/logs
zookeeper.connect=192.168.137.141:2181,192.168.137.142:2181,192.168.137.143:2181/kafka
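Only broker.id and host.name differ between the three machines, so the per-host file can be generated from a shared template. A sketch under that assumption (`make_props` and the placeholder names are hypothetical helpers, not part of Kafka):

```shell
# Render per-broker server.properties settings from a shared template.
template='broker.id=BROKER_ID
port=9092
host.name=HOST_IP
log.dirs=/opt/software/kafka/logs
zookeeper.connect=192.168.137.141:2181,192.168.137.142:2181,192.168.137.143:2181/kafka'

make_props() {  # usage: make_props <broker.id> <host ip>
  printf '%s\n' "$template" | sed -e "s/BROKER_ID/$1/" -e "s/HOST_IP/$2/"
}

make_props 2 192.168.137.142   # settings for hadoop002
```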

4. Environment variables
[root@hadoop001 config]# vi /etc/profile
export KAFKA_HOME=/opt/software/kafka
export PATH=$KAFKA_HOME/bin:$PATH
[root@hadoop001 config]# source /etc/profile

5. Repeat the steps above on the other two machines, adjusting broker.id and host.name for each
 
6. Start/stop
[root@hadoop001 kafka]# nohup kafka-server-start.sh config/server.properties &
[root@hadoop002 kafka]# nohup kafka-server-start.sh config/server.properties &
[root@hadoop003 kafka]# nohup kafka-server-start.sh config/server.properties &
### Stop
bin/kafka-server-stop.sh
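A bare nohup dumps output into nohup.out in whatever directory you happen to be in. A wrapper sketch that pins the log location (the function name and log path are assumptions, not Kafka's own tooling; KAFKA_HOME is assumed set as in step 4):

```shell
# Start a broker in the background with an explicit log file.
start_kafka() {
  config="${1:-$KAFKA_HOME/config/server.properties}"
  log="$KAFKA_HOME/logs/server-start.log"
  nohup "$KAFKA_HOME/bin/kafka-server-start.sh" "$config" >"$log" 2>&1 &
  echo "broker starting, pid $!, log: $log"
}
```

kafka-server-start.sh also accepts a `-daemon` flag that backgrounds the broker itself, which makes a wrapper like this optional.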

---------------------------------------------------------------------------------------------------------------------------------------------
7. Simulation test 1
Create the test topic:
bin/kafka-topics.sh --create \
--zookeeper 192.168.137.141:2181,192.168.137.142:2181,192.168.137.143:2181/kafka \
--replication-factor 3 --partitions 3 --topic test


In one terminal, start a Producer and produce messages to the test topic created above:
bin/kafka-console-producer.sh \
--broker-list 192.168.137.141:9092,192.168.137.142:9092,192.168.137.143:9092 --topic test

In another terminal, start a Consumer subscribed to the test topic created above:
bin/kafka-console-consumer.sh \
--zookeeper 192.168.137.141:2181,192.168.137.142:2181,192.168.137.143:2181/kafka \
--from-beginning --topic test
Type message lines in the Producer terminal, and you will see them appear in the Consumer terminal as they are consumed.




