
No errors in the logs, but no data in Kafka


canal.serverMode = kafka
canal.mq.servers = 127.0.0.1:9092
canal.mq.retries = 0
canal.mq.batchSize = 128
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 50
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all

# use transaction for kafka flatMessage batch produce
canal.mq.transaction = true
#canal.mq.properties. =

I deliberately changed the Kafka port to a wrong value, and the logs still showed no Kafka-related errors at all. Only when running ./stop.sh does the log report that the producer is being closed.

Instance configuration

canal.instance.defaultDatabaseName = rupiah_loan
canal.instance.connectionCharset = UTF-8

# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

table regex

canal.instance.filter.regex=user_loan_plan,user_loan,user_info,user_coupon

table black regex

canal.instance.filter.black.regex=

mq config

canal.mq.topic=canal_local_uangme_loan
canal.mq.dynamicTopic=.*\\..*
canal.mq.partition=0
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*

Startup log

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=96m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: UseCMSCompactAtFullCollection is deprecated and will likely be removed in a future release.
2019-06-06 18:04:34.889 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2019-06-06 18:04:34.950 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2019-06-06 18:04:34.960 [main] INFO c.a.o.c.d.monitor.remote.RemoteConfigLoaderFactory - ## load local canal configurations
2019-06-06 18:04:34.993 [main] INFO com.alibaba.otter.canal.deployer.CanalStater - ## start the canal server.
2019-06-06 18:04:35.040 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[10.60.204.28:11111]

==> ./example/example.log <==
2019-06-06 18:04:35.578 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [canal.properties]
2019-06-06 18:04:35.583 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [example/instance.properties]

==> ./canal/canal.log <==
2019-06-06 18:04:35.858 [main] WARN o.s.beans.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'connectionCharset' being accessed! Ambiguous write methods found next to actually used [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.lang.String)]: [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.nio.charset.Charset)]

==> ./example/example.log <==
2019-06-06 18:04:35.858 [main] WARN o.s.beans.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'connectionCharset' being accessed! Ambiguous write methods found next to actually used [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.lang.String)]: [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.nio.charset.Charset)]
2019-06-06 18:04:35.940 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [canal.properties]
2019-06-06 18:04:35.942 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [example/instance.properties]

==> ./canal/canal.log <==
2019-06-06 18:04:36.283 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set

==> ./example/example.log <==
2019-06-06 18:04:36.283 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set
2019-06-06 18:04:36.631 [main] INFO c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start CannalInstance for 1-example

==> ./canal/canal.log <==
2019-06-06 18:04:36.641 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^user_loan_plan$|^user_coupon$|^user_loan$|^user_info$

==> ./example/example.log <==
2019-06-06 18:04:36.641 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^user_loan_plan$|^user_coupon$|^user_loan$|^user_info$

==> ./canal/canal.log <==
2019-06-06 18:04:36.642 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter :

==> ./example/example.log <==
2019-06-06 18:04:36.642 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter :
2019-06-06 18:04:36.761 [main] INFO c.a.otter.canal.instance.core.AbstractCanalInstance - start successful....

==> ./canal/canal.log <==
2019-06-06 18:04:36.912 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set
2019-06-06 18:04:36.939 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^...$
2019-06-06 18:04:36.939 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter :
2019-06-06 18:04:36.945 [main] INFO com.alibaba.otter.canal.deployer.CanalStater - ## the canal server is running now ......
2019-06-06 18:04:36.947 [destination = metrics , address = null , EventParser] ERROR c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - parse events has an error com.alibaba.otter.canal.parse.exception.CanalParseException: illegal connection is null
2019-06-06 18:04:36.964 [canal-instance-scan-0] INFO c.a.o.canal.deployer.monitor.SpringInstanceConfigMonitor - auto notify stop metrics successful.
2019-06-06 18:04:37.674 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position

==> ./example/example.log <==
2019-06-06 18:04:37.674 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position

==> ./canal/canal.log <==
2019-06-06 18:04:37.675 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just last position {"identity":{"slaveId":-1,"sourceAddress":{"address":"149.129.216.117","port":3306}},"postion":{"gtid":"","included":false,"journalName":"mysql-bin.000001","position":19078581,"serverId":1,"timestamp":1559815408000}}

==> ./example/example.log <==
2019-06-06 18:04:37.675 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just last position {"identity":{"slaveId":-1,"sourceAddress":{"address":"149.129.216.117","port":3306}},"postion":{"gtid":"","included":false,"journalName":"mysql-bin.000001","position":19078581,"serverId":1,"timestamp":1559815408000}}

==> ./canal/canal.log <==
2019-06-06 18:04:37.734 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=mysql-bin.000001,position=19078581,serverId=1,gtid=,timestamp=1559815408000] cost : 49ms , the next step is binlog dump

==> ./example/example.log <==
2019-06-06 18:04:37.734 [destination = example , address = /149.129.216.117:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=mysql-bin.000001,position=19078581,serverId=1,gtid=,timestamp=1559815408000] cost : 49ms , the next step is binlog dump

And what is this problem? I couldn't find any documentation about the ACK configuration.

2019-06-06 22:24:09.336 [main] ERROR com.alibaba.otter.canal.server.CanalMQStarter - ## Something goes wrong when starting up the canal MQ workers:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456) ~[kafka-clients-1.1.1.jar:na]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:303) ~[kafka-clients-1.1.1.jar:na]
	at com.alibaba.otter.canal.kafka.CanalKafkaProducer.init(CanalKafkaProducer.java:67) ~[canal.server-1.1.3.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter.start(CanalMQStarter.java:50) ~[canal.server-1.1.3.jar:na]
	at com.alibaba.otter.canal.deployer.CanalStater.start(CanalStater.java:113) [canal.deployer-1.1.3.jar:na]
	at com.alibaba.otter.canal.deployer.CanalLauncher.main(CanalLauncher.java:57) [canal.deployer-1.1.3.jar:na]
Caused by: org.apache.kafka.common.config.ConfigException: Must set acks to all in order to use the idempotent producer. Otherwise we cannot guarantee idempotence.
	at org.apache.kafka.clients.producer.KafkaProducer.configureAcks(KafkaProducer.java:533) ~[kafka-clients-1.1.1.jar:na]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:392) ~[kafka-clients-1.1.1.jar:na]
	... 5 common frames omitted
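For context on this ConfigException: a transactional Kafka producer (which canal.mq.transaction = true turns on) implies the idempotent producer, and Kafka refuses to construct an idempotent producer unless acks is all (or -1). The error therefore suggests that at the time of this run the effective acks value was not all. Assuming canal 1.1.3 passes canal.mq.acks straight through as the producer's acks setting, a configuration consistent with transactions would be:

```properties
# Transactions imply idempotence; idempotence requires acks=all (or -1),
# otherwise KafkaProducer throws ConfigException at construction time.
canal.mq.transaction = true
canal.mq.acks = all
```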

Canal consuming the binlog looks fine, and there are logs. Testing Kafka with the command-line tools, the connection succeeds and writes succeed; if I stop Kafka, canal does report Kafka connection-failure errors. And yet there is not a single record in Kafka.
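When symptoms like this come up, it helps to rule out plain network reachability before digging into producer configuration. A minimal stdlib sketch (broker_port_open is a hypothetical helper, not part of canal; it only checks that the TCP port accepts connections, not that a Kafka broker is actually serving on it):

```python
import socket

def broker_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection completes the TCP handshake or raises OSError
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the bootstrap server from canal.properties
# broker_port_open("127.0.0.1", 9092)
```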

==> example/meta.log <==
2019-06-06 23:11:33.235 - clientId:1001 cursor:[mysql-bin.000001,19339529,1559833345000,1,] address[/149.129.216.117:3306]
2019-06-06 23:12:12.236 - clientId:1001 cursor:[mysql-bin.000001,19341066,1559833931000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:12:22.235 - clientId:1001 cursor:[mysql-bin.000001,19342156,1559833941000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:13:40.234 - clientId:1001 cursor:[mysql-bin.000001,19343469,1559834019000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:13:50.234 - clientId:1001 cursor:[mysql-bin.000001,19344562,1559834030000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:13:58.238 - clientId:1001 cursor:[mysql-bin.000001,19346202,1559834037000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:14:09.239 - clientId:1001 cursor:[mysql-bin.000001,19347292,1559834048000,1,] address[149.129.216.117/149.129.216.117:3306]
2019-06-06 23:14:57.241 - clientId:1001 cursor:[mysql-bin.000001,19347837,1559834096000,1,] address[149.129.216.117/149.129.216.117:3306]

==> canal/canal.log <==
2019-06-06 23:15:33.855 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1, transactionalId=canal-transactional-id] Connection to node -1 could not be established. Broker may not be available.
2019-06-06 23:15:33.855 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1, transactionalId=canal-transactional-id] Connection to node -1 could not be established. Broker may not be available.
2019-06-06 23:15:34.966 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1, transactionalId=canal-transactional-id] Connection to node -1 could not be established. Broker may not be available.
2019-06-06 23:15:34.966 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1, transactionalId=canal-transactional-id] Connection to node -1 could not be established. Broker may not be available.

Original question by GitHub user hookover

Posted by 云上静思 on 2023-05-04 13:01:57
1 answer
  • Solved: the table filter was missing the database name.

    Original answer by GitHub user hookover

    2023-05-05 10:41:24
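The fix makes sense because canal matches its table filter against the qualified "schema.table" name, so anchored entries like user_loan can never match rupiah_loan.user_loan. A small sketch of that matching behavior (an approximation for illustration, not canal's actual filter code):

```python
import re

def passes_filter(schema_table: str, filter_regex: str) -> bool:
    """Approximate canal's whitelist check: each comma-separated entry
    becomes an anchored regex tested against "schema.table"."""
    patterns = [re.compile(f"^{p}$") for p in filter_regex.split(",")]
    return any(p.match(schema_table) for p in patterns)

broken = "user_loan_plan,user_loan,user_info,user_coupon"  # filter as posted
fixed = (r"rupiah_loan\.user_loan_plan,rupiah_loan\.user_loan,"
         r"rupiah_loan\.user_info,rupiah_loan\.user_coupon")  # with db name

print(passes_filter("rupiah_loan.user_loan", broken))  # False: db name missing
print(passes_filter("rupiah_loan.user_loan", fixed))   # True
```

With the broken filter, every binlog event is silently dropped before reaching the Kafka producer, which is why the logs showed no errors while the topic stayed empty.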