I've got a JSON file of the format:
{
    "SOURCE": "Source A",
    "Model": "ModelABC",
    "Qty": "3"
}
I'm trying to parse this JSON using Logstash. Basically I want the Logstash output to be a list of key:value pairs that I can analyze using Kibana. I thought this could be done out of the box. From a lot of reading, I understand I must use the grok plugin (I am still not sure what the json plugin is for). But I am unable to get an event with all the fields. Instead, I get multiple events (one event for each attribute of my JSON), like so:
{
       "message" => " \"SOURCE\": \"Source A\",",
      "@version" => "1",
    "@timestamp" => "2014-08-31T01:26:23.432Z",
          "type" => "my-json",
          "tags" => [
        [0] "tag-json"
    ],
          "host" => "myserver.example.com",
          "path" => "/opt/mount/ELK/json/mytestjson.json"
}
{
       "message" => " \"Model\": \"ModelABC\",",
      "@version" => "1",
    "@timestamp" => "2014-08-31T01:26:23.438Z",
          "type" => "my-json",
          "tags" => [
        [0] "tag-json"
    ],
          "host" => "myserver.example.com",
          "path" => "/opt/mount/ELK/json/mytestjson.json"
}
{
       "message" => " \"Qty\": \"3\",",
      "@version" => "1",
    "@timestamp" => "2014-08-31T01:26:23.438Z",
          "type" => "my-json",
          "tags" => [
        [0] "tag-json"
    ],
          "host" => "myserver.example.com",
          "path" => "/opt/mount/ELK/json/mytestjson.json"
}
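What I'd like instead is a single event carrying all three fields, something like this (hand-written to illustrate what I'm after, not actual output):

{
        "SOURCE" => "Source A",
         "Model" => "ModelABC",
           "Qty" => "3",
    "@timestamp" => "2014-08-31T01:26:23.432Z",
          "host" => "myserver.example.com",
          "path" => "/opt/mount/ELK/json/mytestjson.json"
}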
Should I use the multiline codec or the json_lines codec? If so, how? Do I need to write my own grok pattern, or is there something generic for JSON that will give me ONE event containing all the key:value pairs, rather than the multiple events above? I couldn't find any documentation that sheds light on this. Any help would be appreciated. My conf file is shown below:
input
{
    file
    {
        type => "my-json"
        path => ["/opt/mount/ELK/json/mytestjson.json"]
        codec => json
        tags => "tag-json"
    }
}
filter
{
    if [type] == "my-json"
    {
        date { locale => "en" match => [ "RECEIVE-TIMESTAMP", "yyyy-MM-dd HH:mm:ss" ] }
    }
}
output
{
    elasticsearch
    {
        host => localhost
    }
    stdout { codec => rubydebug }
}
2 Answers
#1 (5 votes)
I think I found a working answer to my problem. I am not sure if it's a clean solution, but it helps parse multiline JSONs of the type above.
input
{
    file
    {
        codec => multiline
        {
            pattern => '^\{'
            negate => true
            what => previous
        }
        path => ["/opt/mount/ELK/json/*.json"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
        exclude => "*.gz"
    }
}
filter
{
    mutate
    {
        replace => [ "message", "%{message}}" ]
        gsub => [ 'message','\n','']
    }
    if [message] =~ /^{.*}$/
    {
        json { source => message }
    }
}
output
{
    stdout { codec => rubydebug }
}
My multiline codec doesn't capture the last brace, so the message doesn't look like valid JSON to json { source => message }. Hence the mutate filter:
replace => [ "message", "%{message}}" ]
That adds the missing brace, and the
gsub => [ 'message','\n','']
removes the \n characters that are introduced. At the end of it, I have a one-line JSON that can be read by json { source => message }.
If there's a cleaner/easier way to convert the original multi-line JSON to a one-line JSON, please do post it, as I feel the above isn't too clean.
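One variant that might avoid the mutate workaround entirely (an untested sketch, so treat it as an idea rather than a verified config): anchor the multiline pattern on the closing brace and join lines forward instead of backward:

codec => multiline
{
    pattern => '^\}'
    negate => true
    what => next
}

Every line that does not start with } gets glued to the following line, so the event is flushed as soon as the closing brace arrives, closing brace included, and nothing needs to be re-appended afterwards.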
#2 (4 votes)
You will need to use a multiline codec.
input {
    file {
        codec => multiline {
            pattern => '^{'
            negate => true
            what => previous
        }
        path => ['/opt/mount/ELK/json/mytestjson.json']
    }
}
filter {
    json {
        source => message
        remove_field => message
    }
}
The problem you will run into has to do with the last event in the file. It won't show up until there is another event in the file (so basically you'll lose the last event in a file) -- you could append a single { to the file before it gets rotated to deal with that situation.
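If your Logstash release is recent enough, the multiline codec also offers an auto_flush_interval setting (check that your codec version actually supports it) that flushes a pending event after a number of seconds of inactivity, which sidesteps the lost-last-event problem without touching the file:

codec => multiline {
    pattern => '^{'
    negate => true
    what => previous
    auto_flush_interval => 2
}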