Is C floating point non-deterministic?

Date: 2022-10-26 03:19:30

I have read somewhere that there is a source of non-determinism in C double-precision floating point as follows:

  1. The C standard says that 64-bit floats (doubles) are required to produce only about 64-bit accuracy.

  2. Hardware may do floating point in 80-bit registers.

  3. Because of (1), the C compiler is not required to clear the low-order bits of floating-point registers before stuffing a double into the high-order bits.

  4. This means YMMV, i.e. small differences in results can happen.

Is there any now-common combination of hardware and software where this really happens? I see in other threads that .NET has this problem, but are C doubles via gcc OK? (e.g. I am testing for convergence of successive approximations based on exact equality)


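To make the last point concrete, the kind of loop I have in mind looks roughly like the following simplified sketch (Newton's iteration for sqrt(2) is only an illustration, not my actual code); the worry is whether its exact-equality termination test behaves identically everywhere:

    #include <stdio.h>

    int main(void)
    {
        double x = 1.0, prev;
        do {
            prev = x;
            x = 0.5 * (x + 2.0 / x);   /* next approximation */
        } while (x != prev);           /* convergence test based on exact equality */
        printf("converged to %.17g\n", x);
        return 0;
    }
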
3 Answers

#1


12  

The behavior on implementations with excess precision, which seems to be the issue you're concerned about, is specified strictly by the standard in most if not all cases. Combined with IEEE 754 (assuming your C implementation follows Annex F) this does not leave room for the kinds of non-determinism you seem to be asking about. In particular, things like x == x (which Mehrdad mentioned in a comment) failing are forbidden since there are rules for when excess precision is kept in an expression and when it is discarded. Explicit casts and assignment to an object are among the operations that drop excess precision and ensure that you're working with the nominal type.


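As a minimal sketch of those rules, assuming a conforming compiler (for GCC, one of the -std= modes mentioned below), both comparisons in the following program must print 1, because the assignment and the explicit cast each round excess precision away:

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0, b = 3.0;

        /* On an x87 target, a / b may carry 80-bit excess precision while
         * it stays inside an expression.  Assignment to a double object
         * discards that excess precision... */
        double q = a / b;

        /* ...and so does an explicit cast, so both operands below are the
         * same correctly rounded double. */
        printf("%d\n", q == (double)(a / b));

        /* x == x cannot fail for a finite value under these rules. */
        printf("%d\n", q == q);
        return 0;
    }
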
Note however that there are still a lot of broken compilers out there that don't conform to the standards. GCC intentionally disregards them unless you use -std=c99 or -std=c11 (i.e. the "gnu99" and "gnu11" options are intentionally broken in this regard). And prior to GCC 4.5, correct handling of excess precision was not even supported.


#2


2  

This may happen on Intel x86 code that uses the x87 floating-point unit (except probably point 3, which seems bogus: the low-order bits will be set to zero). So the hardware platform is very common, but on the software side the use of x87 is dying out in favor of SSE.


Basically, whether a number is represented in 80 or 64 bits is at the whim of the compiler and may change at any point in the code, with, for example, the consequence that a number which just tested non-zero is now zero.


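Here is a hedged sketch of that effect, along the lines of the examples in the paper cited below. On a hypothetical x87 build (say, 32-bit gcc with -mfpmath=387 and without the conforming excess-precision handling described in #1), the product may compare non-zero while it sits in an 80-bit register, yet become 0.0 once stored as a 64-bit double; with SSE math (-mfpmath=sse) the surprise goes away.

    #include <stdio.h>

    int main(void)
    {
        volatile double x = 1e-300, y = 1e-300;

        /* 1e-600 underflows to 0.0 as a 64-bit double, but is a tiny
         * non-zero value in an 80-bit x87 register. */
        if (x * y != 0.0) {            /* may compare the 80-bit register value */
            double z = x * y;          /* stored as a 64-bit double: may be 0.0 */
            printf("z = %g, 1/z = %g\n", z, 1.0 / z);   /* may print inf */
        } else {
            printf("product underflowed to zero\n");
        }
        return 0;
    }
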
See "The pitfalls of verifying floating-point computations", page 8ff.


#3


0  

Testing for exact convergence (or equality) in floating point is usually a bad idea, even in a totally deterministic environment. FP is an approximate representation to begin with. It is much safer to test for convergence to within a specified epsilon.


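For example, a successive-approximation loop like the one described in the question could terminate on a relative tolerance instead of exact equality (the tolerance value here is only an illustrative choice):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double rel_eps = 1e-12;      /* illustrative relative tolerance */
        double x = 1.0, prev;

        do {
            prev = x;
            x = 0.5 * (x + 2.0 / x);       /* example iteration: Newton for sqrt(2) */
        } while (fabs(x - prev) > rel_eps * fabs(x));

        printf("converged to %.17g\n", x);
        return 0;
    }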