Formatting doubles for output in C#
Running a quick experiment related to double multiplication being broken in .NET, and after reading a couple of articles on C# string formatting, I thought that this:
{
double i = 10 * 0.69;
Console.WriteLine(i);
Console.WriteLine(String.Format(" {0:F20}", i));
Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
Console.WriteLine(String.Format("= {0:F20}", 6.9));
}
would be the C# equivalent of this C code:
{
double i = 10 * 0.69;
printf ( "%f\n", i );
printf ( " %.20f\n", i );
printf ( "+ %.20f\n", 6.9 - i );
printf ( "= %.20f\n", 6.9 );
}
However, C# produces this output:
6.9
6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000
even though i does show as equal to the value 6.89999999999999946709 (not 6.9) in the debugger.
Compare with C, which shows the precision requested by the format:
6.900000
6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527
What's going on?
(Microsoft .NET Framework 버전 3.51 SP1 / Visual Studio C # 2008 Express Edition)
I have a background in numerical computing and experience implementing interval arithmetic, a technique for estimating errors due to precision limits in complicated numerical systems, on a variety of platforms. To get the bounty, don't try to explain about storage precision; in this case it's a difference of one ULP of a 64-bit double.
To get the bounty, I want to know how (or whether) .Net can format a double to the requested precision, as is visible in the C code.
The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by the format and regardless of the exact decimal value of the binary number.
I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, the C code, and the debugger.
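A minimal sketch of the rounding behavior described above. Note that "F20" rounding to 15 significant digits is the old .NET Framework behavior; .NET Core 3.0 and later changed `ToString` to be IEEE 754-compliant, so "F20" there prints the exact digits instead.

```csharp
using System;
using System.Globalization;

class RoundingDemo
{
    static void Main()
    {
        double i = 10 * 0.69;

        // On .NET Framework this is rounded to 15 significant digits
        // before formatting: 6.90000000000000000000
        Console.WriteLine(i.ToString("F20", CultureInfo.InvariantCulture));

        // "G17" always emits enough digits to round-trip the bits:
        Console.WriteLine(i.ToString("G17", CultureInfo.InvariantCulture)); // 6.8999999999999995

        // "R" (round-trip) also distinguishes i from the literal 6.9:
        Console.WriteLine(i.ToString("R", CultureInfo.InvariantCulture));   // 6.8999999999999995
    }
}
```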
There's no built-in function that gives you access to the exact decimal value of a double, or that lets you format a double to a specific number of decimal places, but you could do it yourself by picking apart the internal binary number and rebuilding it as a string representation of the decimal value.
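Picking apart the internal binary number might look like the following rough sketch (this is not Jon Skeet's implementation, just an illustration of extracting the IEEE-754 fields):

```csharp
using System;

class DoubleBits
{
    static void Main()
    {
        double i = 10 * 0.69;
        long bits = BitConverter.DoubleToInt64Bits(i);

        int sign = (int)((bits >> 63) & 1);
        int biasedExp = (int)((bits >> 52) & 0x7FF);  // exponent, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;      // the 52 fraction bits

        Console.WriteLine($"sign={sign} exponent={biasedExp - 1023} mantissa=0x{mantissa:X13}");

        // For normal numbers the exact value is
        // (-1)^sign * (1 + mantissa / 2^52) * 2^(biasedExp - 1023),
        // which can be expanded digit by digit into an exact decimal string.
    }
}
```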
Alternatively, you could use Jon Skeet's DoubleConverter class (linked from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify this so that the output can be rounded to a specific precision.
double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));
// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625
Digits after decimal point
// just two decimal places
String.Format("{0:0.00}", 123.4567); // "123.46"
String.Format("{0:0.00}", 123.4); // "123.40"
String.Format("{0:0.00}", 123.0); // "123.00"
// max. two decimal places
String.Format("{0:0.##}", 123.4567); // "123.46"
String.Format("{0:0.##}", 123.4); // "123.4"
String.Format("{0:0.##}", 123.0); // "123"
// at least two digits before decimal point
String.Format("{0:00.0}", 123.4567); // "123.5"
String.Format("{0:00.0}", 23.4567); // "23.5"
String.Format("{0:00.0}", 3.4567); // "03.5"
String.Format("{0:00.0}", -3.4567); // "-03.5"
Thousands separator
String.Format("{0:0,0.0}", 12345.67); // "12,345.7"
String.Format("{0:0,0}", 12345.67); // "12,346"
Zero
The following code shows how a zero (of double type) can be formatted.
String.Format("{0:0.0}", 0.0); // "0.0"
String.Format("{0:0.#}", 0.0); // "0"
String.Format("{0:#.0}", 0.0); // ".0"
String.Format("{0:#.#}", 0.0); // ""
Align numbers with spaces
String.Format("{0,10:0.0}", 123.4567); // " 123.5"
String.Format("{0,-10:0.0}", 123.4567); // "123.5 "
String.Format("{0,10:0.0}", -123.4567); // " -123.5"
String.Format("{0,-10:0.0}", -123.4567); // "-123.5 "
Custom formatting for negative numbers and zero
String.Format("{0:0.00;minus 0.00;zero}", 123.4567); // "123.46"
String.Format("{0:0.00;minus 0.00;zero}", -123.4567); // "minus 123.46"
String.Format("{0:0.00;minus 0.00;zero}", 0.0); // "zero"
Some funny examples
String.Format("{0:my number is 0.0}", 12.3); // "my number is 12.3"
String.Format("{0:0aaa.bbb0}", 12.3);                // "12aaa.bbb3"
Take a look at this MSDN reference. In the notes, it states that the numbers are rounded to the requested number of decimal places.
If instead we use "{0:R}" it will produce what's called a "round-trip" value; take a look at this MSDN reference for more info. Here is my code and its output:
double d = 10 * 0.69;
Console.WriteLine(" {0:R}", d);
Console.WriteLine("+ {0:F20}", 6.9 - d);
Console.WriteLine("= {0:F20}", 6.9);
Output:
6.8999999999999995
+ 0.00000000000000088818
= 6.90000000000000000000
Though this question is meanwhile closed, I believe it is worth mentioning how this atrocity came into existence. In a way, you may blame the C# spec, which states that a double must have a precision of 15 or 16 digits (the result of IEEE-754). A bit further on (section 4.1.6) it's stated that implementations are allowed to use higher precision. Mind you: higher, not lower. They are even allowed to deviate from IEEE-754: expressions of the type x * y / z, where x * y would yield +/-INF but would be in a valid range after dividing, do not have to result in an error. This feature makes it easier for compilers to use higher precision on architectures where that'd yield better performance.
But I promised a "reason". Here's a quote (you requested a resource in one of your recent comments) from the Shared Source CLI, in clr/src/vm/comnumber.cpp:
"In order to give numbers that are both friendly to display and round-trippable, we parse the number using 15 digits and then determine if it round trips to the same value. If it does, we convert that NUMBER to a string, otherwise we reparse using 17 digits and display that."
In other words: MS's CLI Development Team decided to be both round-trippable and show pretty values that aren't such a pain to read. Good or bad? I'd wish for an opt-in or opt-out.
The trick it uses to find out this round-trippability of any given number? Conversion to a generic NUMBER structure (which has separate fields for the properties of a double) and back, then comparing whether the result is different. If it is different, the exact value is used (as in your middle value with 6.9 - i); if it is the same, the "pretty value" is used.
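The heuristic quoted above can be sketched in plain C# as follows. This is not the CLR's actual code, just an illustration of the "parse with 15 digits, fall back to 17 if it doesn't round-trip" idea:

```csharp
using System;
using System.Globalization;

class PrettyOrExact
{
    // Mimics the CLR heuristic: prefer the friendly 15-digit string,
    // but use all 17 digits when 15 would not round-trip to the same bits.
    static string Format(double d)
    {
        string s15 = d.ToString("G15", CultureInfo.InvariantCulture);
        if (double.Parse(s15, CultureInfo.InvariantCulture) == d)
            return s15;                                             // "pretty" value round-trips
        return d.ToString("G17", CultureInfo.InvariantCulture);     // need all 17 digits
    }

    static void Main()
    {
        // "6.9" (the G15 string) parses to a *different* double, so
        // the full 17-digit form is emitted:
        Console.WriteLine(Format(10 * 0.69)); // 6.8999999999999995
        Console.WriteLine(Format(6.9));       // 6.9
    }
}
```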
As you already remarked in a comment to Andyp, 6.90...00 is bitwise equal to 6.89...9467. And now you know why 0.0...8818 is used: it is bitwise different from 0.0.
This 15 digits barrier is hard-coded and can only be changed by recompiling the CLI, by using Mono or by calling Microsoft and convincing them to add an option to print full "precision" (it is not really precision, but by the lack of a better word). It's probably easier to just calculate the 52 bits precision yourself or use the library mentioned earlier.
EDIT: if you'd like to experiment yourself with IEEE-754 floating points, consider this online tool, which shows you all relevant parts of a floating point.
Use
Console.WriteLine(String.Format(" {0:G17}", i));
That will give you all 17 digits it has. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. {0:R} will not always give you 17 digits: it returns 15 digits if the number can be represented with that precision, or 17 digits if the number can only be represented with maximum precision. There isn't anything you can do to make the double return more digits; that is the way it's implemented. If you don't like it, write a new double class yourself...
.NET's double can't store any more digits than 17, so you can't see 6.89999999999999946709 in the debugger; you would see 6.8999999999999995. Please provide an image to prove us wrong.
The answer to this is simple and can be found on MSDN
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
In your example, the value of i is 6.89999999999999946709 which has the number 9 for all positions between the 3rd and the 16th digit (remember to count the integer part in the digits). When converting to string, the framework rounds the number to the 15th digit.
i = 6.89999999999999 946709
digit = 111111 111122
1 23456789012345 678901
I tried to reproduce your findings, but when I watched 'i' in the debugger it showed up as '6.8999999999999995', not as '6.89999999999999946709' as you wrote in the question. Can you provide steps to reproduce what you saw?
To see what the debugger shows you, you can use a DoubleConverter as in the following line of code:
Console.WriteLine(TypeDescriptor.GetConverter(i).ConvertTo(i, typeof(string)));
Hope this helps!
Edit: I guess I'm more tired than I thought, of course this is the same as formatting to the roundtrip value (as mentioned before).
The answer is yes, double printing is broken in .NET; it prints trailing garbage digits.
You can read how to implement it correctly here.
I have had to do the same for IronScheme.
> (* 10.0 0.69)
6.8999999999999995
> 6.89999999999999946709
6.8999999999999995
> (- 6.9 (* 10.0 0.69))
8.881784197001252e-16
> 6.9
6.9
> (- 6.9 8.881784197001252e-16)
6.8999999999999995
Note: Both C and C# have the correct value, just broken printing.
Update: I am still looking for the mailing list conversation I had that lead up to this discovery.
I found this quick fix.
double i = 10 * 0.69;
System.Diagnostics.Debug.WriteLine(i);
String s = String.Format("{0:F20}", i).Substring(0,20);
System.Diagnostics.Debug.WriteLine(s + " " +s.Length );
Source: https://stackoverflow.com/questions/1421520/formatting-doubles-for-output-in-c-sharp