I'm writing an import script that processes a file that has potentially hundreds of thousands of lines (log file). Using a very simple approach (below) took enough time and memory that I felt like it would take out my MBP at any moment, so I killed the process.
#...
File.open(file, 'r') do |f|
  f.each_line do |line|
    # do stuff here to line
  end
end
This file in particular has 642,868 lines:
/code/src/myimport$ wc -l ../nginx.log
642868 ../nginx.log
Does anyone know of a more efficient (memory/cpu) way to process each line in this file?
UPDATE
The code inside of the f.each_line from above is simply matching a regex against the line. If the match fails, I add the line to a @skipped array. If it passes, I format the matches into a hash (keyed by the "fields" of the match) and append it to a @results array.
# regex built in `def initialize` (not on each line iteration)
@regex = /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - (.{0})- \[([^\]]+?)\] "(GET|POST|PUT|DELETE) ([^\s]+?) (HTTP\/1\.1)" (\d+) (\d+) "-" "(.*)"/

#... loop lines
match = line.match(@regex)
if match.nil?
  @skipped << line
else
  @results << convert_to_hash(match)
end
I'm completely open to this being an inefficient process. I could make the code inside of convert_to_hash use a precomputed lambda instead of figuring out the computation each time. I guess I just assumed it was the line iteration itself that was the problem, not the per-line code.
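For example, here is just a sketch of what convert_to_hash could reduce to if I gave the regex named captures; the field names (ip, user, time, method, path, protocol, status, bytes, agent) are placeholders, not necessarily the keys I actually use:

# hypothetical named-capture version of the same regex
@regex = /(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - (?<user>.{0})- \[(?<time>[^\]]+?)\] "(?<method>GET|POST|PUT|DELETE) (?<path>[^\s]+?) (?<protocol>HTTP\/1\.1)" (?<status>\d+) (?<bytes>\d+) "-" "(?<agent>.*)"/

# convert_to_hash then just pairs capture names with captured values
def convert_to_hash(match)
  Hash[match.names.zip(match.captures)]
end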
3 Answers
#1
5
I just did a test on a 600,000 line file and it iterated over the file in less than half a second. I'm guessing the slowness is not in the file looping but the line parsing. Can you paste your parse code also?
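For reference, the bare-loop timing I'm describing is roughly this (the file name is a placeholder for yours):

require 'benchmark'

puts Benchmark.measure {
  File.open('nginx.log', 'r') do |f|
    f.each_line { |line| }   # iteration only, no per-line work
  end
}

If that also finishes in a fraction of a second for you, the time is going into the per-line code, not the file iteration.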
#2
4
This blog post includes several approaches to parsing large log files. Maybe that's an inspiration. Also have a look at the file-tail gem.
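A rough sketch of the usual file-tail pattern, in case it helps (check the gem's README for the exact API; the file name and settings are placeholders):

require 'file/tail'

File.open('nginx.log') do |log|
  log.extend(File::Tail)
  log.interval = 10        # seconds to sleep between checks for new data
  log.backward(10)         # start 10 lines before the end of the file
  log.tail { |line| puts line }
end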
#3
1
If you are using bash (or similar) you might be able to optimize like this:
In input.rb:
while x = gets
  # Parse
end
then in bash:
cat nginx.log | ruby -n input.rb
The -n flag tells Ruby to assume a 'while gets(); ... end' loop around your script, which might cause it to do something special to optimize.
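To make that concrete, with -n the loop is implicit and each line shows up in $_, so input.rb can shrink to just the loop body (the pattern below is only a placeholder):

# input.rb -- run as: cat nginx.log | ruby -n input.rb
# -n wraps this script in `while gets; ... end`, with the current line in $_
if $_ =~ /placeholder/
  # handle a matching line
else
  # handle a skipped line
end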
You might also want to look into a prewritten solution to the problem, as that will be faster.