[Flume][Kafka] Example of integrating Flume with Kafka (Kafka as the Flume sink, writing to a Kafka topic)

Overview:


An example of integrating Flume with Kafka: a Kafka sink is configured so that Flume delivers the events it collects to a Kafka topic.


Preparation:

$sudo mkdir -p /flume/web_spooldir
$sudo chmod a+w -R /flume
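
A quick sanity check that the spool directory is in place and world-writable (the agent also needs permission there to rename processed files to *.COMPLETED):

$ ls -ld /flume/web_spooldir    # should show drwxrwxrwx after the chmod above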

 

Edit the Flume configuration file:


$ cat /home/tester/flafka/spooldir_kafka.conf

# Name the components on this agent
agent1.sources = weblogsrc
agent1.sinks = kafka-sink
agent1.channels = memchannel

# Configure the source
agent1.sources.weblogsrc.type = spooldir
agent1.sources.weblogsrc.spoolDir = /flume/web_spooldir
agent1.sources.weblogsrc.channels = memchannel

# Configure the sink
agent1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic = weblogs
agent1.sinks.kafka-sink.brokerList = localhost:9092
agent1.sinks.kafka-sink.batchSize = 20
agent1.sinks.kafka-sink.channel = memchannel

# Use a channel which buffers events in memory
agent1.channels.memchannel.type = memory
agent1.channels.memchannel.capacity = 100000
agent1.channels.memchannel.transactionCapacity = 1000
$

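Note that topic, brokerList, and batchSize are the older Kafka sink property names (Flume 1.6 era); later Flume releases renamed them (kafka.topic, kafka.bootstrap.servers), so check the Kafka sink section of the Flume user guide for your version. Before starting the agent, the weblogs topic should also exist unless the broker auto-creates topics; a sketch using the ZooKeeper-based CLI that matches the consumer command used later (flags differ on newer Kafka releases):

$ kafka-topics --create --zookeeper localhost:2181 \
> --replication-factor 1 --partitions 1 --topic weblogs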

 

Run flume-ng:

$ flume-ng agent --conf /etc/flume-ng/conf \
> --conf-file spooldir_kafka.conf \
> --name agent1 -Dflume.root.logger=INFO,console

The output will look something like this:


Info: Sourcing environment configuration script /etc/flume-ng/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /usr/java/default/bin/java -Xmx500m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume-ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar

...

-Djava.library.path=:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application --conf-file spooldir_kafka.conf --name agent1
2017-10-23 01:15:11,209 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2017-10-23 01:15:11,223 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:spooldir_kafka.conf
2017-10-23 01:15:11,256 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:kafka-sink

...

2017-10-23 01:15:11,933 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: weblogsrc started
2017-10-23 01:15:13,003 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property key.serializer.class is overridden to kafka.serializer.StringEncoder
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property metadata.broker.list is overridden to localhost:9092
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property request.required.acks is overridden to 1
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property serializer.class is overridden to kafka.serializer.DefaultEncoder
2017-10-23 01:15:13,718 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: kafka-sink: Successfully registered new MBean.
2017-10-23 01:15:13,719 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: kafka-sink started

...

2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-13.log to /flume/web_spooldir/2014-01-13.log.COMPLETED

...

2017-10-23 01:16:11,441 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,451 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-24.log to /flume/web_spooldir/2014-01-24.log.COMPLETED
2017-10-23 01:16:11,818 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,819 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-15.log to /flume/web_spooldir/2014-02-15.log.COMPLETED


 

Run the Kafka console consumer:

$kafka-console-consumer --zookeeper localhost:2181 --topic weblogs
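
By default the console consumer only shows messages produced after it starts; if Flume has already shipped some files, add --from-beginning to replay the topic from the start:

$ kafka-console-consumer --zookeeper localhost:2181 --topic weblogs --from-beginning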

 

In another terminal window, feed web logs into the /flume/web_spooldir directory:

cp -rf /home/tester/weblogs /tmp/tmp_weblogs
mv /tmp/tmp_weblogs/* /flume/web_spooldir
rm -rf /tmp/tmp_weblogs
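
The stage-then-move pattern is deliberate: the spooling directory source expects each file to be complete and immutable once it appears in /flume/web_spooldir, so the logs are first copied to a staging directory and only then moved in, which avoids Flume picking up half-written files. A minimal smoke test along the same lines (smoke.log is a hypothetical file name):

$ echo "test event" > /tmp/smoke.log
$ mv /tmp/smoke.log /flume/web_spooldir/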

 

The flume-ng window shows the following (log files are being shipped to the Kafka topic weblogs):


2017-10-23 01:36:28,436 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:36:28,449 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2013-09-22.log to /flume/web_spooldir/2013-09-22.log.COMPLETED
2017-10-23 01:36:28,971 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.

...

2017-10-23 01:37:39,011 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-19.log to /flume/web_spooldir/2014-02-19.log.COMPLETED
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-03-09.log to /flume/web_spooldir/2014-03-09.log.COMPLETED


 


The consumer window prints the contents of all the web log files (it receives topic weblogs and thus all of the web log content):


...

213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /KBDOC-00131.html HTTP/1.0" 200 9807 "http://www.tester.com" "tester test 001"
213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /theme.css HTTP/1.0" 200 6448 "http://www.tester.com" "tester test 002"

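As a rough end-to-end check, every input file should now carry the .COMPLETED suffix; counting them confirms that the agent picked up the whole batch (a sketch, assuming no files were left unprocessed):

$ ls /flume/web_spooldir/*.COMPLETED | wc -l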

 

 

This article is reproduced from Jian Ge's Data Garden blog on cnblogs; original link: http://www.cnblogs.com/gaojian/p/7718832.html. Please contact the original author before reprinting.

实时计算Flink版作为一种强大的流处理和批处理统一的计算框架,广泛应用于各种需要实时数据处理和分析的场景。实时计算Flink版通常结合SQL接口、DataStream API、以及与上下游数据源和存储系统的丰富连接器,提供了一套全面的解决方案,以应对各种实时计算需求。其低延迟、高吞吐、容错性强的特点,使其成为众多企业和组织实时数据处理首选的技术平台。以下是实时计算Flink版的一些典型使用合集。