What does Filebeat's native output to Elasticsearch look like, and why does no new data get indexed into ES once I configure pipelines?
My filebeat.yml is as follows:
```
filebeat.config:
  inputs:
    # Mounted `filebeat-inputs` configmap:
    path: ${path.config}/inputs.d/*.yml
    # Reload inputs configs as they change:
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false

# To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      hints.enabled: true

processors:
  - add_cloud_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

output:
  elasticsearch:
    hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    #username: ${ELASTICSEARCH_USERNAME}
    #password: ${ELASTICSEARCH_PASSWORD}
    pipelines:
      - pipeline: "nginx"
        when.contains:
          kubernetes.container.name: "nginx-"
      - pipeline: "java"
        when.contains:
          kubernetes.container.name: "java-"
      - pipeline: "default"
        when.contains:
          kubernetes.container.name: ""
```
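As a first sanity check (the ids here are the ones referenced in `output.elasticsearch.pipelines` above), the three pipelines can be listed to confirm they were actually created on the cluster — as far as I understand, a bulk request that names a missing pipeline fails, so no documents would be indexed:

```
GET /_ingest/pipeline/java,nginx,default
```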
The pipelines are defined as follows:
```
PUT /_ingest/pipeline/java
{
  "description": "[0]java [1]nginx [last] catch-all rule",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "\\[%{LOGLEVEL:level}\\s+?\\]\\[(?<date>\\d{4}-\\d{2}-\\d{2}\\s\\d{2}:\\d{2}:\\d{2},\\d{3})\\]\\[(?<thread>[A-Za-z0-9/-]+?)\\]\\[%{JAVACLASS:class}\\]\\[(?<msg>[\\s\\S]*?)\\]\\[(?<stack>.*?)\\]"
        ]
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}
PUT /_ingest/pipeline/nginx
{
  "description": "[0]java [1]nginx [last] catch-all rule",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{IP:client} - - \\[(?<date>.*?)\\] \"(?<method>[A-Za-z]+?) (?<url>.*?)\" %{NUMBER:statuscode} %{NUMBER:duration} \"(?<refer>.*?)\" \"(?<user-agent>.*?)\""
        ]
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}
PUT /_ingest/pipeline/default
{
  "description": "[0]java [1]nginx [last] catch-all rule",
  "processors": []
}
```
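To check whether the grok pattern itself matches a raw log line, the pipeline can be dry-run with the Simulate API (the sample message here is copied from the document below):

```
POST /_ingest/pipeline/java/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "18:52:50.164 [http-nio-8080-exec-37] INFO c.a.goods.proxy.GoodsGetServiceProxy - ------ request_id=518137347443785728,zdid=42,gid=106059784,从缓存中获取数据:成功 ------"
      }
    }
  ]
}
```

If grok cannot match, the simulate response reports a parse failure for that doc; my understanding is that the same failure at index time causes the document to be rejected rather than stored unparsed.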
Without a pipeline, the documents Filebeat produces look roughly like this:
```
{
"_index": "filebeat-6.5.0-2018.11.30",
"_type": "doc",
"_id": "SNk_ZGcB8a-m5zOt7kpn",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2018-11-30T10:52:50.164Z",
"input": {
"type": "docker"
},
"beat": {
"name": "filebeat-sfr98",
"hostname": "filebeat-sfr98",
"version": "6.5.0"
},
"host": {
"name": "filebeat-sfr98"
},
"offset": 42985020,
"message": "18:52:50.164 [http-nio-8080-exec-37] INFO c.a.goods.proxy.GoodsGetServiceProxy - ------ request_id=518137347443785728,zdid=42,gid=106059784,从缓存中获取数据:成功 ------",
"prospector": {
"type": "docker"
},
"kubernetes": {
"container": {
"name": "java-goods-get-deployment-0"
},
"namespace": "java",
"replicaset": {
"name": "java-goods-get-deployment-d96f55595"
},
"labels": {
"app": "java-goods-get",
"pod-template-hash": "852911151"
},
"pod": {
"name": "java-goods-get-deployment-d96f55595-7mz9s"
},
"node": {
"name": "cn-shenzhen.i-abcdasfasdfsadf"
}
},
"meta": {
"cloud": {
"instance_id": "i-abcdasfasdfsadf",
"region": "cn-shenzhen",
"availability_zone": "cn-shenzhen-d",
"provider": "ecs"
}
},
"source": "/var/lib/docker/containers/cb5df67aa499155118e232d6369c6210923b4ff1346b2f418e412728f7ed1b2b/cb5df67aa499155118e232d6369c6210923b4ff1346b2f418e412728f7ed1b2b-json.log",
"stream": "stdout"
},
"fields": {
"@timestamp": [
"2018-11-30T10:52:50.164Z"
]
},
"sort": [
1543575170164
]
}
```
Why does no new data get indexed once I configure the pipelines that parse the message field, and how can I fix it?