How should I handle the following Elasticsearch error?
```
com.alibaba.datax.common.exception.DataXException: Code:[ESReader-09], Description:[Reading index / type data exception]. - {"root_cause":[{"type":"circuit_breaking_exception","reason":"[fielddata] Data too large, data for [_id] would be [3097501059/2.8gb], which is larger than the limit of [3097362432/2.8gb]","bytes_wanted":3097501059,"bytes_limit":3097362432,"durability":"PERMANENT"},{"type":"circuit_breaking_exception","reason":"[fielddata] Data too large, data for [_id] would be [3101710554/2.8gb], which is larger than the limit of [3097362432/2.8gb]","bytes_wanted":3101710554,"bytes_limit":3097362432,"durability":"PERMANENT"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"devicedata_2022-07","node":"XicIpBGcTP-Nl9uXPBfYNw","reason":{"type":"exception","reason":"java.util.concurrent.ExecutionException: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [3097501059/2.8gb], which is larger than the limit of [3097362432/2.8gb]]","caused_by":{"type":"execution_exception","reason":"execution_exception: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [3097501059/2.8gb], which is larger than the limit of [3097362432/2.8gb]]","caused_by":{"type":"circuit_breaking_exception","reason":"[fielddata] Data too large, data for [_id] would be [3097501059/2.8gb], which is larger than the limit of [3097362432/2.8gb]","bytes_wanted":3097501059,"bytes_limit":3097362432,"durability":"PERMANENT"}}}},{"shard":1,"index":"devicedata_2022-07","node":"lTpJ0vx9SX-7H54FsvEYlA","reason":{"type":"exception","reason":"java.util.concurrent.ExecutionException: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [3101710554/2.8gb], which is larger than the limit of [3097362432/2.8gb]]","caused_by":{"type":"execution_exception","reason":"execution_exception: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [3101710554/2.8gb], which is larger than the limit of [3097362432/2.8gb]]","caused_by":{"type":"circuit_breaking_exception","reason":"[fielddata] Data too large, data for [_id] would be [3101710554/2.8gb], which is larger than the limit of [3097362432/2.8gb]","bytes_wanted":3101710554,"bytes_limit":3097362432,"durability":"PERMANENT"}}}}],"caused_by":{"type":"circuit_breaking_exception","reason":"[fielddata] Data too large, data for [_id] would be [3097501059/2.8gb], which is larger than the limit of [3097362432/2.8gb]","bytes_wanted":3097501059,"bytes_limit":3097362432,"durability":"PERMANENT"}}
```
The main cause is the `_id` field: it does not have doc values enabled, so sorting or aggregating on it loads fielddata onto the heap and can trip the circuit breaker. Avoid aggregating or sorting on `_id`; if you need to sort, add a separate `id` field with doc values enabled. Other mitigations:

1. Switch writes to the bulk API.
2. Tune the index refresh interval.
3. Adjust the translog flush interval.
4. Use auto-generated document IDs.
5. Upgrade the node configuration (e.g. more heap).

This answer was compiled from the DingTalk group "Elasticsearch中文技术社区".
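A minimal sketch of the "add a separate `id` field" fix: the mapping below declares an `id` field of type `keyword`, for which doc values are on by default (shown explicitly here for clarity), so sorts and aggregations read disk-backed doc values instead of building `_id` fielddata on the heap. The index name is taken from the error log; the field value you copy into `id` depends on your own documents.

```python
import json

# Mapping update adding an explicit, sortable `id` field.
# Apply with: PUT /devicedata_2022-07/_mapping  (body below)
mapping = {
    "properties": {
        "id": {
            "type": "keyword",
            # doc_values is true by default for keyword fields;
            # it keeps sort/agg data on disk rather than in fielddata.
            "doc_values": True,
        }
    }
}
print(json.dumps(mapping, indent=2))
```

When indexing, copy each document's identifier into `id` and sort on `id` instead of `_id`.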
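For point 1, the `_bulk` endpoint takes newline-delimited JSON: one action metadata line, then one document line, per document, with a trailing newline. A stdlib-only sketch of building such a body (the sample documents are made up for illustration):

```python
import json

def build_bulk_body(index, docs):
    """Build an NDJSON body for the Elasticsearch _bulk API."""
    lines = []
    for doc in docs:
        # Omitting "_id" in the action line lets Elasticsearch
        # auto-generate IDs (point 4), which is also faster to index.
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the body must end with a newline

body = build_bulk_body("devicedata_2022-07", [{"temp": 21.5}, {"temp": 22.0}])
# POST body to http://<host>:9200/_bulk
# with header Content-Type: application/x-ndjson
```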
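Points 2 and 3 are index settings changes. A sketch of a settings body that lengthens the refresh interval and relaxes translog flushing; the specific values (`30s`, `1gb`) are example choices, not recommendations, and `async` durability trades a small crash-loss window for throughput:

```python
import json

# Apply with: PUT /devicedata_2022-07/_settings  (body below)
settings = {
    "index": {
        "refresh_interval": "30s",  # default is 1s; longer = less refresh churn
        "translog": {
            "durability": "async",          # fsync on an interval, not per request
            "sync_interval": "30s",         # how often the translog is fsynced
            "flush_threshold_size": "1gb",  # flush when translog exceeds this size
        },
    }
}
print(json.dumps(settings, indent=2))
```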