
Kafka transactions fail in HA mode

In standalone mode, Canal runs Kafka transactional produce without problems, but when deployed on multiple servers in HA mode it fails with the following error:

Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
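This error comes from Kafka's producer-fencing mechanism: every `initTransactions()` call for a given transactionalId bumps a broker-side epoch, and any producer still holding an older epoch is rejected. The following is a minimal pure-Python sketch of that broker-side check (an illustration of the concept, not the Kafka client or broker code), showing why two Canal servers sharing one transactionalId fence each other:

```python
class Broker:
    """Toy model of the broker-side transactional-producer epoch table."""

    def __init__(self):
        self.epochs = {}  # transactionalId -> latest registered epoch

    def init_transactions(self, txn_id):
        """Register a producer for txn_id; bumps and returns the epoch."""
        self.epochs[txn_id] = self.epochs.get(txn_id, -1) + 1
        return self.epochs[txn_id]

    def send(self, txn_id, epoch, record):
        """Accept a record only from the producer holding the latest epoch."""
        if epoch < self.epochs[txn_id]:
            raise RuntimeError(
                "Producer attempted an operation with an old epoch.")
        return f"committed: {record}"


broker = Broker()

# Canal server A starts and initialises its transactional producer.
epoch_a = broker.init_transactions("canal-tx")
print(broker.send("canal-tx", epoch_a, "binlog-event-1"))

# Canal server B (the HA peer) initialises with the SAME transactionalId,
# bumping the epoch -- server A is now fenced.
epoch_b = broker.init_transactions("canal-tx")

try:
    broker.send("canal-tx", epoch_a, "binlog-event-2")
except RuntimeError as e:
    print(e)  # the "old epoch" error from the question
```

In real Kafka the fenced producer surfaces this as a `ProducerFencedException`.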

Canal version: 1.1.3. Server configuration:

canal.id = 1
canal.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
canal.zkServers =**
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
canal.serverMode = kafka
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
canal.instance.memory.buffer.size = 16384
canal.instance.memory.buffer.memunit = 1024
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

canal.instance.detecting.enable = false
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

canal.instance.transaction.size = 1024
canal.instance.fallbackIntervalInSeconds = 60

canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = true
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = true
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = true

canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

canal.instance.get.ddl.isolation = false

canal.instance.parser.parallel = true
canal.instance.parser.parallelThreadSize = 16
canal.instance.parser.parallelBufferSize = 256

canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
canal.instance.tsdb.snapshot.interval = 24
canal.instance.tsdb.snapshot.expire = 360

canal.aliyun.accessKey =
canal.aliyun.secretKey =

canal.destinations =
canal.conf.dir = ../conf
canal.auto.scan = true
canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.spring.xml = classpath:spring/default-instance.xml

canal.mq.servers = host1:port1,host2:port2
canal.mq.retries = 2
canal.mq.batchSize = 32768
canal.mq.maxRequestSize = 2097152
canal.mq.lingerMs = 200
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
canal.mq.transaction = true

Instance configuration:

canal.instance.gtidon=false

canal.instance.master.address=host:port
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

canal.instance.tsdb.enable=true
canal.instance.dbUsername=usr
canal.instance.dbPassword=pwd
canal.instance.connectionCharset = UTF-8
canal.instance.enableDruid=false

canal.instance.filter.regex=db.tb
canal.instance.filter.black.regex=

canal.mq.topic=topic
canal.mq.partition=0
canal.mq.partitionHash=.\..:id

Please let me know if I have missed anything.

Original question from GitHub user uniorder

Posted by 数据大拿 on 2023-05-04 11:50:33
1 answer
  • Kafka transactions in 1.1.3 do not support multiple instances; consider upgrading to 1.1.4.

    Original answer from GitHub user agapple

    2023-05-05 10:21:34
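For context on why running multiple instances breaks in 1.1.3: Kafka fences concurrent producers that share a `transactional.id`, so each live producer instance needs its own id. The fragment below uses Kafka client property keys (not Canal configuration keys), and the id value shown is a hypothetical example:

```properties
# Kafka producer client settings -- illustrative only, not canal.properties keys.
# Each concurrently running producer must use a distinct transactional.id,
# otherwise the broker fences all but the most recently initialised one.
transactional.id=canal-instance-1
enable.idempotence=true
acks=all
```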