 
The C programming language provides the ability to use floating-point numbers for calculations. The C Standard specifies the requirements that a conforming implementation must follow for floating-point numbers but makes few guarantees about the specific underlying floating-point representation because of the existence of competing floating-point systems.
By definition, a floating-point number is of finite precision and, regardless of the underlying implementation, is prone to errors associated with rounding. (See FLP01-C. Take care in rearranging floating-point expressions and FLP02-C. Avoid using floating-point numbers when precise computation is needed.)
The most common floating-point system is specified by the IEEE 754 standard and is used by the default options of Microsoft Visual Studio and of GCC on Intel architectures. An older system, generally considered legacy, is the IBM floating-point representation (sometimes called IBM/370). Each of these systems has different precisions and ranges of representable values. As a result, they do not represent all of the same values, are not binary compatible, and have different associated error rates.
Because of the lack of guarantees on the specifics of the underlying floating-point system, no assumptions can be made about either precision or range. Even if code is not intended to be portable, the chosen compiler's behavior must be well understood at all compiler optimization levels.
Here is a simple illustration of precision limitations. The following code prints the decimal representation of 1/3 to 50 decimal places. Ideally, it would print 50 numeral 3s:
| Code Block | 
|---|
| 
#include <stdio.h>

int main(void) {
  float f = 1.00f / 3.00f;
  printf("Float is %.50f\n", f);
  return 0;
}
 | 
On 64-bit Linux, with GCC 4.1, it produces:
| Code Block | 
|---|
| 
Float is 0.33333334326744079589843750000000000000000000000000
 | 
On 64-bit Windows, with Microsoft Visual Studio 2012, it produces:
| Code Block | 
|---|
| 
Float is 0.33333334326744080000000000000000000000000000000000
 | 
Additionally, compilers may treat floating-point variables differently under different levels of optimization, as in this example:
| Code Block | 
|---|
| 
double a = 3.0;
double b = 7.0;
double c = a / b;
if (c == a / b) {
  printf("Comparison succeeds\n");
} else {
  printf("Unexpected result\n");
}
 | 
When compiled on an IA-32 Linux machine with GCC 3.4.4 at optimization level 1 or higher, or on an IA-64 Windows machine with Microsoft Visual Studio 2012 in Debug or Release mode, this code prints:
| Code Block | 
|---|
| Comparison succeeds | 
When compiled on an IA-32 Linux machine with GCC 3.4.4 with optimization turned off, this code prints:
| Code Block | 
|---|
| Unexpected result | 
This occurs because, on the IA-32 Linux machine without optimization, the result of a / b is computed with extended precision inside the FPU and then stored into c; in the process, the FPU automatically rounds the result to fit into a double. The value read back from memory then compares unequally to the internal representation, which has extended precision. Windows does not use the extended precision mode, so all computation is done with double precision, and there are no differences in precision between values stored in memory and those internal to the FPU. For GCC, compiling at optimization level 1 or higher eliminates the unnecessary store into memory, so all computation happens within the FPU with extended precision [Gough 2005].
The standard constant FLT_EPSILON, defined in <float.h>, can be used to evaluate whether two floating-point values are close enough to be considered equivalent given the granularity of floating-point operations for a given implementation. FLT_EPSILON represents the difference between 1 and the least value greater than 1 that is representable as a float. The granularity of a floating-point operation is determined by multiplying the operand with the larger absolute value by FLT_EPSILON.
| Code Block | 
|---|
| #include <float.h>
#include <math.h>
#include <stdio.h>

/* Relative difference, scaled by the larger-magnitude operand */
float RelDif(float a, float b) {
  float c = fabsf(a);
  float d = fabsf(b);
  d = fmaxf(c, d);
  return d == 0.0f ? 0.0f : fabsf(a - b) / d;
}

/* ... */
float a = 3.0f;
float b = 7.0f;
float c = a / b;
if (RelDif(c, a / b) <= FLT_EPSILON) {
  puts("Comparison succeeds");
} else {
  puts("Unexpected result");
}
 | 
On all tested platforms, this code prints:
| Code Block | 
|---|
| Comparison succeeds | 
For double-precision and long double precision floating-point values, use a similar approach with the DBL_EPSILON and LDBL_EPSILON constants, respectively.
Consider using numerical analysis to properly understand the numerical properties of the problem.
Risk Assessment
Failing to understand the limitations of floating-point numbers can result in unexpected computational results and exceptional conditions, possibly resulting in a violation of data integrity.
| Recommendation | Severity | Likelihood | Detectable | Repairable | Priority | Level | 
|---|---|---|---|---|---|---|
| FLP00-C | Medium | Probable | No | No | P4 | L3 | 
Automated Detection
| Tool | Version | Checker | Description | 
|---|---|---|---|
| CodeSonar | | LANG.ARITH.FMULOFLOW, LANG.ARITH.FPEQUAL | Float multiplication overflow; floating-point equality | 
| ECLAIR | | CC2.FLP00 | Fully implemented | 
| Helix QAC | | C0275, C0581, C1490, C3339 | | 
| Parasoft C/C++test | | CERT_C-FLP00-a | Floating-point expressions shall not be tested for equality or inequality | 
| PC-lint Plus | | 777, 9252 | Partially supported | 
| Polyspace Bug Finder | | CERT C: Rec. FLP00-C | Checks for absorption of float operand (rec. partially covered) | 
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this recommendation on the CERT website.
Related Guidelines
Bibliography
| [Gough 2005] | Section 8.6, "Floating-Point Issues" | 
| [Hatton 1995] | Section 2.7.3, "Floating-Point Misbehavior" | 
| [IEEE 754 2006] | | 
| [ISO/IEC 9899:1999] | Section 5.2.4.2.2, "Characteristics of floating types <float.h>" | 
| [Lockheed Martin 2005] | AV Rule 202, Floating-point variables shall not be tested for exact equality or inequality | 
...
05. Floating Point (FLP)      FLP01-C. Take care in rearranging floating-point expressions