
Proof of Numerical Programs, Floating-Point Arithmetic and Numerical Analysis

Overview

Orientation of three points

This theme aims at proving programs that perform computations on floating-point numbers, such as single- or double-precision numbers as defined by the IEEE-754 standard.

Floating-point arithmetic is primarily used as an efficient way of approximating arithmetic on real numbers. Due to its limited precision, floating-point computations may introduce inaccuracies in the numerical results. The picture on the right illustrates this. It shows the orientation of three points, represented by the orange arrow. The two extremal vertices are fixed on the diagonal, while the middle point can move freely. The color (blue/green/red) at the position of the middle point shows the orientation. For instance, when this point is above the diagonal, the three points are oriented counter-clockwise, hence the color blue.

The orientation of the three points can be obtained by computing the determinant of a 2×2-matrix and looking at the sign of the result (5 subtractions, 2 multiplications). Zero means the three points are aligned; if positive, they are oriented one way; if negative, they are oriented the other way. While the computed sign seems correct overall, the lower part of the picture shows that the inaccuracies introduced by floating-point computations will produce an incorrect orientation for some positions of the middle point, in particular when the three points are almost aligned.
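As an illustration, a straightforward (and unverified) double-precision version of this determinant test could look as follows; near-aligned configurations are precisely where its rounded sign may disagree with the exact one. The function name and interface are purely illustrative.

    /* Orientation of the points (px,py), (qx,qy), (rx,ry) from the sign of a
       2x2 determinant: 5 subtractions and 2 multiplications, each rounded in
       double precision. */
    int orient(double px, double py, double qx, double qy, double rx, double ry)
    {
        double det = (qx - px) * (ry - py) - (qy - py) * (rx - px);
        if (det > 0.0) return  1;   /* counter-clockwise */
        if (det < 0.0) return -1;   /* clockwise */
        return 0;                   /* (apparently) aligned */
    }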

See also: David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic.


People

Sylvie Boldo, Guillaume Melquiond, and partly Catherine Lelay and Claude Marché.

Former contributors

Tuyen Nguyen.

Themes

Floating-Point Arithmetic

Emulation of FMA

We are interested in verifying algorithms that perform floating-point computations to obtain intricate results. They are usually short, but their purpose is not obvious from looking at their code. Indeed, they are state-of-the-art devices that have been tuned beyond recognition by reordering operations, using function approximations, relying on exclusion principles, and so on, until a balance between accuracy and performance was reached. Testing is heavily used while developing such devices, but the size of floating-point domains precludes exhaustive testing. That is why we are investigating methods and developing tools for formally verifying them and ensuring that all behaviors have been accounted for.

We have developed a generic formalization of floating-point arithmetic that is suitable both for verifying numerical programs and for computing inside a proof assistant. We have also formally proved the correctness of several algorithms that are often encountered as basic blocks: accurate computation of polynomial discriminants, error-free transformations, emulation of the fused multiply-add (FMA) operator, and so on. The picture on the right presents the algorithm for emulating the FMA.
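As an example of the error-free transformations mentioned above (not the FMA-emulation algorithm shown in the picture), here is a sketch of the classical TwoSum algorithm in C; barring overflow, the pair (s, e) it returns satisfies s + e = a + b exactly, with s the correctly rounded sum.

    /* TwoSum error-free transformation: s = fl(a + b) and e = (a + b) - s,
       computed exactly using only rounded double-precision operations. */
    void two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        double bv = *s - a;        /* part of b actually added into s */
        double av = *s - bv;       /* part of a actually added into s */
        *e = (a - av) + (b - bv);  /* exact rounding error of the addition */
    }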

While we have been successful in manually writing formal proofs, it is an approach that does not scale well: verifying lengthy floating-point algorithms soon becomes tedious and time-consuming, even for experts in formal systems. So we are also developing automated tools for verifying arithmetic facts (domains, rounding errors, and so on) and generating their formal proof.

Due to the mathematical and numerical expertise that went into the numerical devices we are interested in, only part of them will ever be within reach of purely automated tools. Manual intervention is still needed to handle the most intricate parts of the algorithms. That is why we are focusing on combining interactive proving and automated tools. Starting from an annotated program, proof obligations ensuring its correctness are generated. Ideally, most of these obligations are discharged automatically, possibly by tools dedicated to floating-point arithmetic. The remaining few obligations are then handled manually inside a proof assistant; they do not have to be solved in their entirety, they just need to be simplified enough for the automated tools to take over.
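For instance, a toy annotated function in the spirit of ACSL (the specification language used by tools such as Frama-C) is sketched below; the function and the error bound are illustrative assumptions, not taken from an actual development. Inside the annotations, the arithmetic is exact arithmetic over the reals, so the postcondition bounds the rounding error of the computation.

    /*@ requires \abs(x) <= 1.0;
      @ ensures  \abs(\result - x * x) <= 0x1p-53;
      @*/
    double square(double x)
    {
        return x * x;  /* a single rounded multiplication */
    }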

Our analyses also take into account semantic changes of floating-point algorithms due to peculiarities of the computing environment, e.g. architectures and compilers. For instance, an instruction set may provide extended-precision registers for storing floating-point values, or fused multiply-add operators. A compiler may then take advantage of these features and optimize the code accordingly. As a consequence, the precision of floating-point operations at execution time no longer matches the original algorithm, causing discrepancies and behavior changes.
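A tiny (purely illustrative) C program shows the kind of discrepancy at stake: whether or not the expression a * a - b is contracted into an FMA turns an exactly zero result into a nonzero one.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0 + 0x1p-52;   /* smallest double larger than 1 */
        double b = a * a;           /* rounded product: 1 + 2^-51 */

        double t  = a * a;          /* rounded again: equal to b */
        double r1 = t - b;          /* exactly 0 */
        double r2 = fma(a, a, -b);  /* exact residual of the product: 2^-104 */

        /* If a compiler contracts `a * a - b` into an FMA, that expression
           silently switches from the r1 behavior to the r2 behavior. */
        printf("%a\n%a\n", r1, r2); /* prints 0x0p+0 and 0x1p-104 */
        return 0;
    }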


Numerical Analysis Programs

We aim at developing and applying methods to formally prove the soundness of programs from numerical analysis. This is done in two steps: bounding the rounding error and bounding the method error. The rounding error corresponds to the accumulation of the round-off errors due to the floating-point computation of the values; here we had to develop new, more involved methods because the usual techniques gave too large an error bound. The method error corresponds to the error due to the discretization of the partial differential equation on a grid. The mathematical proofs from the literature turned out to be insufficient, and we had to fill several holes in order to complete the formal proof. The main difficulty was the definition of the mathematical big O, which must be made fully precise (in contrast with the usual, rather loose mathematical notation) and uniform on an interval.
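To give an idea of what uniformity means here (a standard textbook formulation, not necessarily the exact statement used in the formal development): f(x, Δx) = O(g(Δx)) uniformly on an interval I when the constants do not depend on x, i.e., in LaTeX notation,

    \exists C > 0,\ \exists \delta > 0,\ \forall x \in I,\ \forall \Delta x,\quad
    |\Delta x| \le \delta \;\Longrightarrow\; |f(x,\Delta x)| \le C\,|g(\Delta x)|.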

Oscillating rope (left) and numerical scheme (right)

We have worked on the 1D acoustic wave equation, that is, the oscillation of a rope attached at its two ends, as shown in the left picture. This partial differential equation is approximated by a second-order centered finite difference scheme, also known as the "three-point scheme." More precisely, as explained in the right picture, the value p_j^k (red dot) depends on p_{j-1}^{k-1}, p_j^{k-1}, p_{j+1}^{k-1}, and p_j^{k-2} (blue dots).
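For illustration, a minimal (unverified) C sketch of one time step of this scheme is given below; the identifiers, the constant coefficient, and the boundary handling are assumptions made for the example, not the actual verified program.

    /* One time step of the second-order centered ("three-point") scheme for
       the 1D wave equation, with the rope attached at both ends.
       p_prev holds time step k-2, p_cur holds k-1, p_next receives k. */
    void wave_step(int nx, double c, double dt, double dx,
                   const double *p_prev, const double *p_cur, double *p_next)
    {
        double a = (c * dt / dx) * (c * dt / dx);   /* squared CFL ratio */
        p_next[0] = 0.0;
        p_next[nx - 1] = 0.0;                       /* fixed ends of the rope */
        for (int j = 1; j < nx - 1; j++)
            p_next[j] = 2.0 * p_cur[j] - p_prev[j]
                      + a * (p_cur[j + 1] - 2.0 * p_cur[j] + p_cur[j - 1]);
    }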


Software and Proof Developments

Series of verified programs

See our gallery of verified programs or S. Boldo's gallery.

Related Grants

Former Grants

Main Publications

[1] Guillaume Melquiond. Floating-point arithmetic in the Coq system. Information and Computation, 216:14-23, 2012. [ bib | DOI ]
[2] Sylvie Boldo and Guillaume Melquiond. Flocq: A unified library for proving floating-point algorithms in Coq. In Elisardo Antelo, David Hough, and Paolo Ienne, editors, Proceedings of the 20th IEEE Symposium on Computer Arithmetic, pages 243-252, Tübingen, Germany, 2011. [ bib | DOI | full paper on HAL ]
[3] Florent de Dinechin, Christoph Lauter, and Guillaume Melquiond. Certifying the floating-point implementation of an elementary function using Gappa. IEEE Transactions on Computers, 60(2):242-253, 2011. [ bib | DOI | full paper on HAL ]
[4] Sylvie Boldo and Thi Minh Tuyen Nguyen. Proofs of numerical programs when the compiler optimizes. Innovations in Systems and Software Engineering, 7:151-160, 2011. [ bib ]
[5] Sylvie Boldo. Floats & Ropes: a case study for formal numerical program verification. In 36th International Colloquium on Automata, Languages and Programming, volume 5556 of Lecture Notes in Computer Science - ARCoSS, pages 91-102, Rhodes, Greece, July 2009. Springer. [ bib ]
[6] Sylvie Boldo and Guillaume Melquiond. Emulation of FMA and correctly-rounded sums: proved algorithms using rounding to odd. IEEE Transactions on Computers, 57(4):462-471, 2008. [ bib | full paper on HAL | PDF ]