In standalone mode, Canal performs Kafka transactional writes correctly, but when running multiple servers in HA mode it reports the following error:
Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
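For reference, this message is Kafka's transactional-producer fencing: when a second live producer registers the same transactional.id, the broker bumps the epoch for that id and rejects further operations from the older producer. The sketch below is a minimal standalone reproduction using the plain Kafka Java client, not Canal code; the broker address, topic name, and transactional.id are placeholder assumptions.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.ProducerFencedException;

public class FencingDemo {
    // Build a transactional producer; "localhost:9092" and the
    // transactional.id below are placeholder values for this demo.
    static KafkaProducer<String, String> newProducer(String txId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", txId);   // both producers share this id
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static void main(String[] args) {
        KafkaProducer<String, String> first = newProducer("canal-tx-demo");
        first.initTransactions();               // registers epoch N for this id

        // A second node (e.g. another HA server) starts with the same id:
        KafkaProducer<String, String> second = newProducer("canal-tx-demo");
        second.initTransactions();              // broker bumps the epoch to N+1

        try {
            first.beginTransaction();           // first producer now holds a stale epoch
            first.send(new ProducerRecord<>("demo-topic", "k", "v"));
            first.commitTransaction();          // rejected: old epoch
        } catch (ProducerFencedException e) {
            // "Producer attempted an operation with an old epoch ..."
            System.err.println("fenced: " + e.getMessage());
            first.close();
        }
        second.close();
    }
}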
Canal version: 1.1.3. The configuration is as follows:
canal.id = 1
canal.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
canal.zkServers =**
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
canal.serverMode = kafka
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
canal.instance.memory.buffer.size = 16384
canal.instance.memory.buffer.memunit = 1024
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
canal.instance.detecting.enable = false
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false
canal.instance.transaction.size = 1024
canal.instance.fallbackIntervalInSeconds = 60
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = true
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = true
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = true
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
canal.instance.get.ddl.isolation = false
canal.instance.parser.parallel = true
canal.instance.parser.parallelThreadSize = 16
canal.instance.parser.parallelBufferSize = 256
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
canal.instance.tsdb.snapshot.interval = 24
canal.instance.tsdb.snapshot.expire = 360
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.destinations =
canal.conf.dir = ../conf
canal.auto.scan = true
canal.auto.scan.interval = 5
canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
canal.mq.servers = host1:port1,host2:port2
canal.mq.retries = 2
canal.mq.batchSize = 32768
canal.mq.maxRequestSize = 2097152
canal.mq.lingerMs = 200
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
canal.mq.transaction = true
The instance configuration is as follows:
canal.instance.gtidon=false
canal.instance.master.address=host:port
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
canal.instance.tsdb.enable=true
canal.instance.dbUsername=usr
canal.instance.dbPassword=pwd
canal.instance.connectionCharset = UTF-8
canal.instance.enableDruid=false
canal.instance.filter.regex=db.tb
canal.instance.filter.black.regex=
canal.mq.topic=topic
canal.mq.partition=0
canal.mq.partitionHash=.\..:id
Please let me know if anything is missing from my configuration.
Original question from GitHub user uniorder
In 1.1.3, Kafka transactions do not support multiple instances; consider upgrading to 1.1.4.
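For background on why a single shared transactional.id breaks down here: Kafka requires every concurrently running transactional producer to use its own transactional.id, otherwise they fence each other as shown above. The sketch below only illustrates that Kafka-level requirement under an assumed naming scheme ("canal-transactional-" + destination) with placeholder brokers and a hypothetical forDestination helper; it is not Canal 1.1.4's actual implementation.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class PerInstanceProducer {
    // Hypothetical scheme: derive a unique transactional.id from the instance
    // (destination) name, so producers started for different instances or by
    // different HA servers never share an id and never fence each other.
    public static KafkaProducer<String, String> forDestination(String destination) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "host1:port1,host2:port2"); // placeholder brokers
        props.put("transactional.id", "canal-transactional-" + destination);
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions(); // only fences an older producer with the SAME id
        return producer;
    }
}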
Original answer from GitHub user agapple