canal v1.1.3, canal adapter v1.1.3, hbase v1.0.3
Using rocketMQ mode, the adapter receives messages fine: INFO c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"s":"122345546786","i":36}],"database":"mytest","destination":"example","es":1561555721000,"groupId":"g1","isDdl":false,"old":[{"s":"546786897893"}],"pkNames":["i"],"sql":"","table":"foo","ts":1561555722080,"type":"UPDATE"}
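For reference, the flat-message JSON logged above can be inspected with a short Python sketch; "data" carries the row after the change, while "old" carries only the columns whose values changed (field names here are exactly those in the log line):

```python
import json

# The flat-message JSON from the LoggerAdapterExample log line above.
msg = json.loads('''{"data":[{"s":"122345546786","i":36}],"database":"mytest",
"destination":"example","es":1561555721000,"groupId":"g1","isDdl":false,
"old":[{"s":"546786897893"}],"pkNames":["i"],"sql":"","table":"foo",
"ts":1561555722080,"type":"UPDATE"}''')

row = msg["data"][0]      # row state after the UPDATE
changed = msg["old"][0]   # previous values of changed columns only

print(msg["database"], msg["table"], msg["pkNames"])  # mytest foo ['i']
print("s changed from", changed["s"], "to", row["s"])
```

This confirms the message itself is well-formed, so the failure is on the HBase write side rather than in the MQ consumption.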
But nothing is written to HBase, and the adapter reports: ERROR c.a.o.c.a.launcher.loader.CanalAdapterRocketMQWorker - null Error sync but ACK!
Below are the configuration files. canal adapter application.yml:

server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null
canal.conf:
  mode: rocketMQ # tcp # kafka rocketMQ
  canalServerHost: 127.0.0.1:11111
  mqServers: 127.0.0.1:9876 #127.0.0.1:9092 #or rocketmq
  batchSize: 500
  syncBatchSize: 1000
  retries: 0
  timeout:
  accessKey:
  secretKey:
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://127.0.0.1:3306/mytest?useUnicode=true
      username: root
      password: root
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
      - groupId: g1
        outerAdapters:
          - name: logger
          - name: hbase
            properties:
              hbase.zookeeper.quorum: 192.168.1.240
              hbase.zookeeper.property.clientPort: 2181
              zookeeper.znode.parent: /hbase

cat conf/hbase/mytest_foo.yml
dataSourceKey: defaultDS
destination: example
groupId: g1
hbaseMapping:
  mode: STRING #NATIVE #PHOENIX
  database: mytest          # database name
  table: foo                # table name
  hbaseTable: MYTEST.FOO    # HBase table name
  family: CF                # default column family name
  uppercaseQualifier: true  # uppercase qualifier names, default true
  commitBatch: 1            # batch commit size
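One thing worth verifying with this mapping (an assumption, not confirmed by the question) is that the target table and column family actually exist in HBase, since a write to a missing table surfaces as a sync exception in the adapter. From the HBase shell, using the names taken from mytest_foo.yml above:

```
# In `hbase shell`, check whether the mapped table exists:
exists 'MYTEST.FOO'

# If it does not, create it with the column family from the mapping (CF):
create 'MYTEST.FOO', {NAME => 'CF'}
```

Also check that the adapter host can reach 192.168.1.240:2181 and that /hbase is the correct znode parent for this HBase deployment.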
Original question from GitHub user elfc
Error sync but ACK!
This exception means the client exceeded the retry count while processing the data and gave up without success, so the batch was skipped (and still ACKed). You need to find out why the write is throwing an exception.
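Note that with retries: 0 in the canal.conf shown above, a failed batch is abandoned on the first error, which makes the underlying write exception easy to miss. A sketch of a change that keeps the failure visible in the adapter log (the values here are illustrative, not prescribed):

```yaml
canal.conf:
  mode: rocketMQ
  retries: 3        # retry a failed batch a few times before giving up
  timeout: 30000    # per-batch timeout in ms (illustrative value)
```

With retries enabled, the stack trace of the actual HBase write failure should appear in logs/adapter/adapter.log before the "Error sync but ACK!" message.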
Original answer from GitHub user agapple