...

The reason for this behavior is that Linux uses the internal extended-precision mode of the x87 floating-point unit (FPU) on IA-32 machines for increased accuracy during computation. When the result is stored into memory by the assignment to c, the FPU automatically rounds it to fit into a double. The value read back from memory consequently no longer compares equal to the extended-precision value still held inside the FPU. Windows does not use the extended-precision mode, so all computation is done in double precision, and there is no difference in precision between values stored in memory and those internal to the FPU. For GCC, compiling at optimization level 1 or higher eliminates the unnecessary store into memory, so all computation happens within the FPU at extended precision [Gough 2005].
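The kind of comparison that exhibits this behavior is sketched below. This is a minimal, illustrative example: the variable names and values are assumptions, and whether the mismatch actually occurs depends on the platform and optimization level described above.

Code Block
#include <stdio.h>

int main(void) {
  double a = 3.0;
  double b = 7.0;
  double c = a / b;  /* quotient is rounded to double when stored in c */

  /* The recomputed a / b may still be held at extended precision in an
   * x87 register, so this equality test can fail on IA-32 Linux at -O0. */
  if (c == a / b) {
    puts("Comparison succeeds");
  } else {
    puts("Unexpected result");
  }
  return 0;
}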

The constant FLT_EPSILON, defined in <float.h>, can be used to evaluate whether two floating-point values are close enough to be considered equivalent, given the granularity of floating-point operations for a given implementation. FLT_EPSILON represents the difference between 1 and the least value greater than 1 that is representable as a float. The granularity of a floating-point operation is determined by multiplying the operand with the larger absolute value by FLT_EPSILON.

Code Block
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Returns the relative difference between a and b, scaled by the
 * operand with the larger absolute value. */
float RelDif(float a, float b) {
  float c = fabsf(a);
  float d = fabsf(b);

  d = fmaxf(c, d);

  return d == 0.0f ? 0.0f : fabsf(a - b) / d;
}

/* ... */

float a = 3.0f;
float b = 7.0f;
float c = a / b;

if (RelDif(c, a / b) <= FLT_EPSILON) {
  puts("Comparison succeeds");
} else {
  puts("Unexpected result");
}

On all tested platforms, this code prints

Code Block
Comparison succeeds

For double-precision and long double-precision floating-point values, use a similar approach with the DBL_EPSILON and LDBL_EPSILON constants, respectively.
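A double-precision variant might look like the following sketch; RelDifD is a hypothetical counterpart to RelDif() above, and the values are illustrative only.

Code Block
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Relative difference for doubles, analogous to RelDif() above. */
double RelDifD(double a, double b) {
  double m = fmax(fabs(a), fabs(b));
  return m == 0.0 ? 0.0 : fabs(a - b) / m;
}

int main(void) {
  double a = 3.0;
  double b = 7.0;
  double c = a / b;

  if (RelDifD(c, a / b) <= DBL_EPSILON) {
    puts("Comparison succeeds");
  } else {
    puts("Unexpected result");
  }
  return 0;
}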

Consider using numerical analysis to properly understand the numerical properties of the problem.

Risk Assessment

Failing to understand the limitations of floating-point numbers can result in unexpected computational results and exceptional conditions, possibly resulting in a violation of data integrity.

Recommendation  Severity  Likelihood  Remediation Cost  Priority  Level
FLP00-C         Medium    Probable    High              P4        L3

Automated Detection

Tool    Version  Checker   Description
ECLAIR           floateql  Fully implemented

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this recommendation on the CERT website.

...

...

[Gough 2005] Section 8.6, "Floating-Point Issues"
[IEEE 754 2006]
[Hatton 1995] Section 2.7.3, "Floating-Point Misbehavior"
[Lockheed Martin 2005] AV Rule 202, "Floating-point variables shall not be tested for exact equality or inequality"

 

...