Where should ELK and Filebeat be placed?

Time: 2023-02-12 11:34:11

I am working in a distributed environment. I have a central machine which needs to monitor some 100 machines, so I need to use the ELK stack and keep monitoring the data.

Since elasticsearch, logstash, kibana and filebeat are independent pieces of software, I want to know where I should ideally place them in my distributed environment.

My approach was to keep kibana and elasticsearch on the central node and keep logstash and filebeat on the individual nodes.

Logstash will send the data to the central node's elasticsearch, and kibana will display it.

Please let me know if this design is right.

1 solution

#1

Your design is not bad, but if you install elasticsearch on only one server, over time you will face availability problems.

You can do this:

  1. Install filebeat and logstash on all the nodes.
  2. Install elasticsearch as a cluster. That way, if one elasticsearch node goes down, another node can easily take over (a minimal cluster config sketch follows this list).
  3. Install Kibana on the central node.
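
As a rough sketch of step 2, each elasticsearch node could carry a minimal elasticsearch.yml along these lines. The cluster name, node names and the Elasticsearch 7+ bootstrap settings here are just illustrative assumptions, not values from the question:

# elasticsearch.yml -- minimal sketch of one node in a hypothetical 3-node cluster
cluster.name: central-logging
node.name: es-node-1
network.host: 0.0.0.0

# the other master-eligible nodes this node contacts to join the cluster
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]

# only needed the first time the cluster is bootstrapped (Elasticsearch 7+)
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]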

NB:

  • Make sure you configure filebeat to point to more than one logstash server. By doing so, if one logstash instance fails, filebeat can still ship logs to another one (a minimal filebeat sketch follows this list).
  • Also make sure your logstash configuration points to all the data nodes (node.data) of your elasticsearch cluster.
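
For the first point, a minimal filebeat.yml could look like the sketch below; the log path and logstash hostnames are placeholders. With more than one host listed and loadbalance enabled, filebeat spreads events across the hosts and skips a logstash server that is unreachable:

# filebeat.yml -- minimal sketch; paths and hostnames are hypothetical
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log

output.logstash:
  # list more than one logstash host so filebeat can fail over
  hosts: ["logstash-1:5044", "logstash-2:5044"]
  loadbalance: true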

You can also go further by installing kibana on, say, 3 nodes and attaching a load balancer to them. That way the load balancer will route requests to a healthy kibana instance.

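As one possible illustration (nginx here is just an example, any load balancer will do), a reverse proxy in front of three kibana instances might look roughly like this; the hostnames are placeholders:

# nginx.conf excerpt (inside the http block) -- hypothetical sketch of a kibana load balancer
upstream kibana_backend {
    server kibana-1:5601;
    server kibana-2:5601;
    server kibana-3:5601;
}

server {
    listen 80;
    location / {
        proxy_pass http://kibana_backend;
    }
}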

UPDATE

With elasticsearch configured, we can configure logstash as follows:

output {
    elasticsearch {
        hosts => ["http://123.456.789.1:9200","http://123.456.789.2:9200"]
        index => "indexname"
    }
}

You don't need to add stdout { codec => rubydebug } in your configuration.

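For completeness, each logstash pipeline also needs an input for the events filebeat ships. A minimal sketch using the beats input on the conventional port 5044 (adjust the port to whatever your filebeat output actually uses):

input {
    beats {
        # the port filebeat's output.logstash points at
        port => 5044
    }
}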

Hope this helps.
