I'm seeing strange behavior in a .NET program:
Console.WriteLine(Int64.MaxValue.ToString());
// displays 9223372036854775807, which is 2^63-1, as expected
Int64 a = 256*256*256*127; // ok
Int64 a = 256*256*256*128; // compile-time error:
// "The operation overflows at compile time in checked mode"
// If I do this at runtime, I get some negative values, so the overflow indeed happens.
Why do my Int64s behave as if they were Int32s, although Int64.MaxValue seems to confirm they use 64 bits?
In case it's relevant: I'm using a 32-bit OS, and the target platform is set to "Any CPU".
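A minimal sketch of the runtime behaviour mentioned above, assuming the default unchecked context for non-constant expressions (a non-constant operand is used so the compile-time constant check cannot reject it):
int n = 128;                    // non-constant, so no compile-time overflow check
Int64 a = 256 * 256 * 256 * n;  // computed entirely in Int32 arithmetic, wraps around
Console.WriteLine(a);           // prints -2147483648, not 2147483648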
2 Answers
#1
Your RHS is only using Int32 values, so the whole operation is performed using Int32 arithmetic, then the Int32 result is promoted to a long.
Change it to this:
Int64 a = 256*256*256*128L;
and all will be well.
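For instance, a small sketch verifying the suggested fix: the intermediate Int32 products still fit, and the final multiplication involves the long literal 128L, so it is carried out in Int64 arithmetic.
Int64 a = 256 * 256 * 256 * 128L; // 256*256*256 = 16777216 fits in Int32,
                                  // then 16777216 * 128L is computed as Int64
Console.WriteLine(a);             // prints 2147483648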
#2
Use:
Int64 a = 256L*256L*256L*128L;
The L suffix denotes an Int64 literal; no suffix means Int32.
What you wrote:
Int64 a = 256*256*256*128;
means:
Int64 a = (Int32)256*(Int32)256*(Int32)256*(Int32)128;
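A short sketch contrasting the two forms; the unchecked keyword is used here only so that the all-Int32 constant expression compiles and its wrap-around becomes visible:
Int64 wrapped = unchecked(256 * 256 * 256 * 128); // Int32 arithmetic wraps around
Int64 correct = 256L * 256L * 256L * 128L;        // Int64 arithmetic throughout
Console.WriteLine(wrapped);                       // prints -2147483648
Console.WriteLine(correct);                       // prints 2147483648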