Hi,

As part of our monitoring work for our customers, we stumbled upon an issue on some of our customers' servers that have a wal_keep_segments setting higher than 0.

We have a monitoring script that checks the number of WAL files in the pg_xlog directory, according to the settings of three parameters (checkpoint_completion_target, checkpoint_segments, and wal_keep_segments). We usually add a percentage to the usual formula:
greatest(
(2 + checkpoint_completion_target) * checkpoint_segments + 1,
checkpoint_segments + wal_keep_segments + 1
)
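For illustration only, the documented estimate can be read off a live server with a query along these lines (a minimal sketch; the 20% margin is an arbitrary placeholder for the percentage mentioned above, and the parameter names are the 9.1-9.4 ones):

    -- Documented estimate of WAL files in pg_xlog, plus an illustrative 20% margin.
    SELECT ceil(1.2 * greatest(
             (2 + cct) * cs + 1,
             cs + wks + 1)) AS alert_threshold
    FROM (SELECT current_setting('checkpoint_segments')::int            AS cs,
                 current_setting('wal_keep_segments')::int              AS wks,
                 current_setting('checkpoint_completion_target')::float AS cct) s;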
On Fri, Aug 8, 2014 at 12:08 AM, Guillaume Lelarge <[hidden email]> wrote:

> greatest(
> (2 + checkpoint_completion_target) * checkpoint_segments + 1,
> checkpoint_segments + wal_keep_segments + 1
> )
I think the first bug is even having this formula in the documentation to start with, and in trying to use it.
"and will normally not be more than..."This may be "normal" for a toy system. I think that the normal state for any system worth monitoring is that it has had load spikes at some point in the past.
So the important part is the next bit of the doc, which describes how many segments it climbs back down to upon recovering from a spike. And that part doesn't mention wal_keep_segments at all, which surely cannot be correct.
I will try to independently derive the correct formula from the code, as you did, without looking too much at your derivation first, and see if we get the same answer.
> Monitoring is another matter, and I don't really think a monitoring
> solution should count the WAL files. What actually really matters is the
> database availability, and that is covered with having enough disk space in
> the WALs partition.
If we don't count the WAL files, though, that eliminates the best way of
detecting when archiving is failing.
>> > If we don't count the WAL files, though, that eliminates the best way of
>> > detecting when archiving is failing.
>> >
>> >
> WAL files don't give you this directly. You may think it's an issue to get
> a lot of WAL files, but it can just be a spike of changes. Counting .ready
> files makes more sense when you're trying to see if wal archiving is
> failing. And now, using pg_stat_archiver is the way to go (thanks Gabriele
> :) ).
Yeah, a situation where we can't give our users any kind of reasonable
monitoring threshold at all sucks though. Also, it makes it kind of
hard to allocate a wal partition if it could be 10X the minimum size,
you know?
What happened to the work Heikki was doing on making transaction log
disk usage sane?
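For reference, the two checks suggested in the quoted reply can be sketched roughly as follows (assumes 9.4 or later for pg_stat_archiver, a superuser for pg_ls_dir, and the pre-10 pg_xlog directory name):

    -- Archiving health via pg_stat_archiver: failures and time since last success.
    SELECT archived_count,
           failed_count,
           last_failed_wal,
           now() - last_archived_time AS time_since_last_archive
    FROM pg_stat_archiver;

    -- Backlog of segments still waiting to be archived (.ready files).
    SELECT count(*) AS ready_files
    FROM pg_ls_dir('pg_xlog/archive_status') AS f
    WHERE f LIKE '%.ready';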
On Mon, Nov 3, 2014 at 12:39:26PM -0800, Jeff Janes wrote:
> It looked to me that the formula, when descending from a previously stressed
> state, would be:
>
> greatest((1 + checkpoint_completion_target) * checkpoint_segments,
>          wal_keep_segments) + 1 +
> 2 * checkpoint_segments + 1
I don't think we can assume checkpoint_completion_target is at all
reliable enough to base a maximum calculation on, assuming anything
above the maximum is cause for concern and something to inform the admins
about.
Assuming checkpoint_completion_target is 1 for maximum purposes, how
about:
max(2 * checkpoint_segments, wal_keep_segments) + 2 * checkpoint_segments + 2
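A rough sketch of what that ceiling looks like next to the live directory (again assuming a superuser for pg_ls_dir and the pre-10 pg_xlog path; the regexp simply matches WAL segment file names):

    -- Proposed maximum: max(2 * checkpoint_segments, wal_keep_segments)
    --                    + 2 * checkpoint_segments + 2
    SELECT (SELECT count(*)
            FROM pg_ls_dir('pg_xlog') AS f
            WHERE f ~ '^[0-9A-F]{24}$')               AS current_wal_files,
           greatest(2 * cs, wks) + 2 * cs + 2         AS proposed_max
    FROM (SELECT current_setting('checkpoint_segments')::int AS cs,
                 current_setting('wal_keep_segments')::int   AS wks) s;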
Sorry for my very late answer. It's been a tough month.

2014-11-27 0:00 GMT+01:00 Bruce Momjian <[hidden email]>:

> Assuming checkpoint_completion_target is 1 for maximum purposes, how about:
>
> max(2 * checkpoint_segments, wal_keep_segments) + 2 * checkpoint_segments + 2
Seems something I could agree on. At least, it makes sense, and it works for my customers. Although I'm wondering why "+ 2", and not "+ 1". It seems Jeff and you agree on this, so I may have misunderstood something.
On Tue, Dec 30, 2014 at 12:35 AM, Guillaume Lelarge <[hidden email]> wrote:

> Seems something I could agree on. At least, it makes sense, and it works for
> my customers. Although I'm wondering why "+ 2", and not "+ 1". It seems Jeff
> and you agree on this, so I may have misunderstood something.
From hazy memory, one +1 comes from the currently active WAL file, which exists but is not counted towards either wal_keep_segments nor towards recycled files. And the other +1 comes from the formula for how many recycled files to retain, which explicitly has a +1 in it.
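To make the arithmetic concrete, with purely illustrative values of checkpoint_segments = 3 and wal_keep_segments = 32, the proposed ceiling works out to:

    -- Illustrative numbers only: checkpoint_segments = 3, wal_keep_segments = 32.
    SELECT greatest(2 * 3, 32)   -- max(2 * checkpoint_segments, wal_keep_segments) = 32
           + 2 * 3               -- 2 * checkpoint_segments                         =  6
           + 1                   -- the currently active WAL file
           + 1                   -- the extra recycled file from the explicit +1
           AS max_wal_files,                                     -- 40 files
           (greatest(2 * 3, 32) + 2 * 3 + 2) * 16 AS approx_mb;  -- ~640 MB at 16 MB each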
>
> (1 + checkpoint_completion_target) * checkpoint_segments + 1 +
> max(wal_keep_segments, checkpoint_segments)
Now that we have min_wal_size and max_wal_size in 9.5, I don't see any
value in figuring out the proper formula for backpatching.
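For completeness, on 9.5 the bound is expressed as a size rather than derived from a file-count formula; a minimal example with placeholder values (both settings are reloadable):

    -- 9.5+: steer pg_xlog size directly (max_wal_size is a soft limit)
    -- instead of estimating a file count. Values below are placeholders,
    -- not recommendations.
    ALTER SYSTEM SET max_wal_size = '2GB';
    ALTER SYSTEM SET min_wal_size = '512MB';
    SELECT pg_reload_conf();
    SHOW max_wal_size;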
Notes:
- In the official documentation for PostgreSQL 9.1 through 9.4, the estimate for the number of WAL files kept in pg_xlog is given as (2 + checkpoint_completion_target) * checkpoint_segments + 1. As the discussion among the PostgreSQL experts above shows, that formula is problematic; a more accurate one is max(2 * checkpoint_segments, wal_keep_segments) + 2 * checkpoint_segments + 2.
- PostgreSQL 9.5 adds two new parameters, min_wal_size and max_wal_size. max_wal_size together with checkpoint_completion_target controls how much XLOG can be written before a checkpoint is triggered, while min_wal_size and max_wal_size control which XLOG files can be recycled. See 德哥's blog post for details.
- The June issue of this year's Taobao database kernel monthly report also mentions this problem; they ran into it because their WAL grew too large. The formula they arrived at is essentially the same as the one above, except that checkpoint_completion_target is not fixed at 1: max(wal_keep_segments, checkpoint_segments + checkpoint_segments * checkpoint_completion_target) + 2 * checkpoint_segments + 1 + 1. Worth a look for anyone interested, though it is nowhere near as entertaining as the discussion above.
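A small sketch comparing the two estimates from the live settings of a 9.1-9.4 server (both results are counts of 16 MB segments; with_cct is the variant that keeps checkpoint_completion_target in the first term):

    -- Simplified ceiling (checkpoint_completion_target taken as 1) versus the
    -- variant from the report that keeps checkpoint_completion_target.
    SELECT greatest(2 * cs, wks) + 2 * cs + 2                   AS simplified_max,
           greatest(wks, cs + ceil(cs * cct)::int) + 2 * cs + 2 AS with_cct
    FROM (SELECT current_setting('checkpoint_segments')::int            AS cs,
                 current_setting('wal_keep_segments')::int              AS wks,
                 current_setting('checkpoint_completion_target')::float AS cct) s;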