How to collect log files from S3? elasticbeanstalk

Time: 2020-12-19 23:02:50

I've enabled log file rotation to Amazon S3; every hour Amazon creates a file "var_log_httpd_rotated_error_log.gz" for every instance in my Elastic Beanstalk environment.

First question: the log files will not overlap? So every time Amazon saves the file to S3, it also deletes it from the instance and creates a new one, right?

Second question: how can I collect all those files? I want to build a server that collects all of them and lets me search for text in those files.

1 solution

#1


  1. That is what rotating means. It stops writing to one file and begins writing to a new file.


  2. If they are being uploaded to S3, you can write code to download and index those files. Splunk or Loggly may help you here (see the sketch below).
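As a minimal sketch of the "download and index" idea, the snippet below lists the rotated .gz log objects in an S3 bucket, downloads and decompresses each one, and prints lines matching a search string. The bucket name, key prefix, and search term are hypothetical placeholders, not values from the original post; a real setup would feed the extracted text into an indexer such as Splunk or Loggly rather than scanning it linearly.

```python
# Sketch: pull rotated Elastic Beanstalk logs from S3 and grep them.
# Assumptions (not from the original post): bucket name, key prefix, and
# search term are placeholders; boto3 must be installed and AWS credentials
# configured in the environment.
import gzip

import boto3

BUCKET = "my-elasticbeanstalk-logs"      # hypothetical bucket name
PREFIX = "resources/environments/logs/"  # hypothetical key prefix
SEARCH = "Internal Server Error"         # hypothetical search string

s3 = boto3.client("s3")


def search_rotated_logs(bucket: str, prefix: str, needle: str) -> None:
    """Download every rotated .gz log under the prefix and print matching lines."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if not key.endswith(".gz"):
                continue
            # Fetch and decompress the rotated log file.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            text = gzip.decompress(body).decode("utf-8", errors="replace")
            for line in text.splitlines():
                if needle in line:
                    print(f"{key}: {line}")


if __name__ == "__main__":
    search_rotated_logs(BUCKET, PREFIX, SEARCH)
```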
