
Error org.apache.kafka.common.errors.TimeoutException after upgrading Canal from v1.1.4 to v1.1.5

After the upgrade, Kafka reports a timeout:

2021-08-17 14:50:40.657 [MQ-Parallel-Sender-5] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-2] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
2021-08-17 14:50:40.665 [pool-12-thread-1] ERROR c.a.o.canal.connector.kafka.producer.CanalKafkaProducer - java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
	at com.alibaba.otter.canal.connector.kafka.producer.CanalKafkaProducer.send(CanalKafkaProducer.java:184) ~[na:na]
	at com.alibaba.otter.canal.server.CanalMQStarter.worker(CanalMQStarter.java:181) [canal.server-1.1.5.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter.access$100(CanalMQStarter.java:25) [canal.server-1.1.5.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter$CanalMQRunnable.run(CanalMQStarter.java:223) [canal.server-1.1.5.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1150) ~[na:na]
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:846) ~[na:na]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:784) ~[na:na]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:671) ~[na:na]
	at com.alibaba.otter.canal.connector.kafka.producer.CanalKafkaProducer.produce(CanalKafkaProducer.java:268) ~[na:na]
	at com.alibaba.otter.canal.connector.kafka.producer.CanalKafkaProducer.send(CanalKafkaProducer.java:261) ~[na:na]
	at com.alibaba.otter.canal.connector.kafka.producer.CanalKafkaProducer.lambda$send$0(CanalKafkaProducer.java:156) ~[na:na]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_181]
	... 3 common frames omitted
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Canal is deployed through the web UI.

The v1.1.4 MQ configuration is as follows:

##################################################
#########              MQ              #############
##################################################
# Since we chose Kafka, this is the Kafka cluster address; separate multiple addresses with commas
canal.mq.servers = 10.234.6.220:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test

# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
# aliyun mq namespace
#canal.mq.namespace =

##################################################
#########     Kafka Kerberos Info      #############
##################################################
canal.mq.kafka.kerberos.enable = false
canal.mq.kafka.kerberos.krb5FilePath = "../conf/kerberos/krb5.conf"
canal.mq.kafka.kerberos.jaasFilePath = "../conf/kerberos/jaas.conf"

The v1.1.5 MQ configuration is as follows:

##################################################
#########         MQ Properties        #############
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid =

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
#canal.mq.properties. =
canal.mq.producerGroup = test
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########             Kafka            #############
##################################################
# Kafka bootstrap.servers
kafka.bootstrap.servers = 10.234.6.220:9092
# Kafka ProducerConfig.ACKS_CONFIG
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
# Kafka ProducerConfig.LINGER_MS_CONFIG; with flatMessage format it is recommended to raise this value, e.g. to 200
kafka.linger.ms = 100
# Kafka ProducerConfig.MAX_REQUEST_SIZE_CONFIG
kafka.max.request.size = 1048576
# Kafka ProducerConfig.BUFFER_MEMORY_CONFIG
kafka.buffer.memory = 33554432
# Limits the number of unacknowledged requests the client can have outstanding on a single connection.
# Setting this to 1 means the client cannot send another request to the same broker until the broker has responded.
# Note: this is set to avoid message reordering.
kafka.max.in.flight.requests.per.connection = 1
# Number of retries on send failure
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"
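The reordering concern behind `kafka.max.in.flight.requests.per.connection = 1` above can be made concrete with a toy simulation (plain Python for illustration, not Kafka client code): when more than one request is in flight and the first one fails and is retried, the retried batch arrives at the broker after a later batch.

```python
# Toy simulation (not Kafka code) of why max.in.flight.requests.per.connection = 1
# preserves ordering when retries are possible.
def deliver(batches, max_in_flight, fail_first_attempt):
    """Simulate a broker log: each batch in fail_first_attempt fails once, then is retried."""
    log, pending = [], list(batches)
    while pending:
        window = pending[:max_in_flight]      # requests sent concurrently
        pending = pending[max_in_flight:]
        retries = []
        for b in window:
            if b in fail_first_attempt:
                fail_first_attempt.discard(b)  # fails once, will be retried
                retries.append(b)
            else:
                log.append(b)                  # broker appends the batch
        pending = retries + pending            # retried batch re-enters the send queue
    return log

# With 2 requests in flight, "m1" fails once and lands after "m2": order broken.
print(deliver(["m1", "m2"], 2, {"m1"}))   # ['m2', 'm1']
# With 1 request in flight, the retry completes before "m2" is ever sent: order kept.
print(deliver(["m1", "m2"], 1, {"m1"}))   # ['m1', 'm2']
```

(With `kafka.retries = 0` as in the config above, a failed batch is simply lost instead of retried, so reordering cannot happen either way; the in-flight limit matters once retries are enabled.)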

Could you please help me figure out what is causing the Kafka timeout?

I tried deleting the topic, and the timeout went away. But my production environment has many topics, so deleting them all is not really an option. I hope to find the root cause of the problem. Thanks.

Originally asked by GitHub user bjddd192

山海行 2023-04-27 16:11:42
1 answer
  • Looking into it further, it turns out the dynamic topic name has changed: in 1.1.4 it is db_wms1.bl_ch_check, while in 1.1.5 it is db_wms1_bl_ch_check. But the rule was never adjusted; in both versions it is: canal.mq.dynamicTopic=.*\\..*
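A rough sketch of why the unchanged rule stops lining up (Python for illustration; `topic_for` is a hypothetical helper, not Canal's actual matching code, and the underscore separator is simply the v1.1.5 behavior observed above): a pattern that requires a literal dot no longer matches the underscored name.

```python
import re

# canal.mq.dynamicTopic = .*\..*  -- the rule requires a literal dot in the name.
# (The properties file escapes the backslash; the effective regex is .*\..*)
PATTERN = re.compile(r".*\..*")

def topic_for(schema, table, separator):
    """Hypothetical helper: build the name and apply the dynamicTopic rule."""
    name = f"{schema}{separator}{table}"
    return name if PATTERN.fullmatch(name) else None  # None: rule does not apply

# v1.1.4-style naming ("schema.table") matches the rule:
print(topic_for("db_wms1", "bl_ch_check", "."))   # db_wms1.bl_ch_check
# v1.1.5-style naming ("schema_table") contains no dot, so the rule no longer applies:
print(topic_for("db_wms1", "bl_ch_check", "_"))   # None
```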

    Originally answered by GitHub user bjddd192

    2023-04-27 22:09:17