Subtopic Deep Dive

High-Precision Floating-Point Computation
Research Guide

What is High-Precision Floating-Point Computation?

High-Precision Floating-Point Computation covers algorithms and libraries for arbitrary-precision arithmetic beyond IEEE 754 double precision, focusing on accurate multiplication and summation and on integration with numerical software.

This subtopic covers multiple-precision techniques for accurate computation in ill-conditioned problems. Key works include adaptive-precision methods (Shewchuk, 1997; 522 citations) and posit arithmetic as a floating-point alternative (Gustafson and Yonemoto, 2017; 349 citations). More than 10,000 combined citations span work ranging from foundational algorithms such as LSQR (Paige and Saunders, 1982; 4,312 citations) to recent compression schemes.

15 Curated Papers · 3 Key Challenges

Why It Matters

High-precision floating-point arithmetic enables reliable solutions in physics simulations, cryptographic protocols, and number-theoretic computations where double precision fails due to rounding error. Shewchuk's (1997) adaptive-precision arithmetic underpins robust geometric predicates in computational geometry software such as CGAL. Gustafson and Yonemoto (2017) report posits reducing error in financial modeling by 10-100x relative to floats. Lindström (2014) compresses visualization data, cutting storage roughly 4x while preserving precision for large-scale scientific datasets.

Key Research Challenges

Performance Trade-offs

Arbitrary-precision operations run 10-100x slower than double precision, limiting scalability in large simulations. Paige and Saunders (1982) note that iterative solvers such as LSQR must balance precision against cost for sparse systems. Gustafson and Yonemoto (2017) address the trade-off with posits, which achieve near-float speeds.
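The trade-off is easy to see with Python's standard-library decimal module, used here as a stand-in for arbitrary-precision libraries such as MPFR: extra digits recover results that double precision loses entirely, at the cost of software-implemented arithmetic that is far slower than hardware floats.

```python
from decimal import Decimal, getcontext

# In IEEE 754 double precision, 1e-20 sits far below the 53-bit
# significand of 1.0, so it vanishes entirely:
naive = (1.0 + 1e-20) - 1.0  # rounds to 0.0

# With 50 decimal digits of working precision, the small term survives.
getcontext().prec = 50
precise = (Decimal(1) + Decimal("1e-20")) - Decimal(1)

print(naive)    # 0.0
print(precise)  # 1E-20
```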

Error Propagation Control

Summation and multiplication accumulate rounding errors over long computations, destabilizing ill-conditioned problems. Shewchuk (1997) introduces adaptive precision to certify geometric predicates without resorting to full exact arithmetic. Walther (1971) unifies the elementary functions in a single algorithm, limiting propagation in transcendental computations.
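Compensated summation is the classic low-cost defense against this accumulation. A minimal sketch of Kahan's algorithm, a simpler relative of the error-free transformations underlying Shewchuk's method:

```python
def kahan_sum(values):
    """Sum a sequence of floats with a running compensation term
    that recaptures low-order bits lost to rounding."""
    total = 0.0
    c = 0.0  # compensation: the error from the previous addition
    for v in values:
        y = v - c            # apply the correction to the next term
        t = total + y        # low-order bits of y may be lost here...
        c = (t - total) - y  # ...but this difference recovers them
        total = t
    return total

# 100 unit increments are individually absorbed by a large running
# total under plain left-to-right summation, but compensated
# summation recovers them.
vals = [1e16] + [1.0] * 100
print(sum(vals))        # plain sum: every 1.0 is rounded away
print(kahan_sum(vals))  # compensated sum: close to 1e16 + 100
```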

Hardware Integration

Software libraries such as MPFR struggle with GPU acceleration and SIMD instructions. Dongarra et al. (1990) extend the BLAS to high-performance matrix-matrix operations, but only at fixed precision. Lindström (2014) enables fixed-rate compression for memory-bound visualization pipelines.
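For context, the central Level 3 operation in Dongarra et al. (1990) is the general matrix-matrix product GEMM, C ← αAB + βC. A sketch in NumPy (assuming NumPy is available); the matmul is dispatched to an optimized BLAS dgemm kernel, which operates at fixed double precision only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((4, 5))
alpha, beta = 2.0, 0.5

# Level 3 BLAS GEMM update: C <- alpha*A@B + beta*C.
# NumPy routes the A @ B product to the underlying BLAS library.
C_new = alpha * (A @ B) + beta * C
```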

Essential Papers

1.

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

Christopher C. Paige, Michael A. Saunders · 1982 · ACM Transactions on Mathematical Software · 4.3K citations


2.

A set of level 3 basic linear algebra subprograms

Jack Dongarra, Jeremy Du Croz, Sven Hammarling et al. · 1990 · ACM Transactions on Mathematical Software · 1.8K citations

This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations that should provide for efficient and portable implementati...

3.

A unified algorithm for elementary functions

John Stephen Walther · 1971 · 1.0K citations

This paper describes a single unified algorithm for the calculation of elementary functions including multiplication, division, sin, cos, tan, arctan, sinh, cosh, tanh, arctanh, ln, exp and square-...
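Walther's unified algorithm generalizes CORDIC: one shift-and-add iteration, steered by a sign decision and a precomputed arctangent table, covers the circular, linear, and hyperbolic cases. A minimal double-precision sketch of the circular (sin/cos) case:

```python
import math

def cordic_sin_cos(theta, iterations=40):
    """Compute (sin, cos) of theta (|theta| <= pi/2) by CORDIC
    rotations: only shifts, adds, and a small arctangent table."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Cumulative gain of the pseudo-rotations; dividing it out
    # at the end restores a unit-length vector.
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0  # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y * k, x * k
```

In hardware, the multiplications by 2^-i are bit shifts, which is why the same loop served as a practical route to all the elementary functions listed above.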

4.

Algorithm 583: LSQR: Sparse Linear Equations and Least Squares Problems

Christopher C. Paige, Michael A. Saunders · 1982 · ACM Transactions on Mathematical Software · 776 citations


5.

Fixed-Rate Compressed Floating-Point Arrays

Peter Lindström · 2014 · IEEE Transactions on Visualization and Computer Graphics · 610 citations

Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We pres...

6.

Information-based complexity

J Traub, H Wozniakowski, I Babuska et al. · 1989 · Mathematics and Computers in Simulation · 556 citations

7.

Remark on algorithm 368: Numerical inversion of Laplace transforms

Harald Stehfest · 1970 · Communications of the ACM · 549 citations


Reading Guide

Foundational Papers

Start with Paige and Saunders (1982), whose LSQR iterative solver balances precision in sparse systems (4,312 citations); then Dongarra et al. (1990) Level 3 BLAS for performance baselines; and Walther (1971) for elementary-function algorithms.

Recent Advances

Study Gustafson and Yonemoto (2017) posits for drop-in float replacement (349 citations) and Lindström (2014) compression for data-intensive apps (610 citations).

Core Methods

Core techniques: coordinate rotation for elementary functions (Walther, 1971), adaptive-precision refinement (Shewchuk, 1997), tapered posit encoding (Gustafson and Yonemoto, 2017), and fixed-rate compression (Lindström, 2014).
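Shewchuk-style exact summation is closer to hand than it looks: Python's math.fsum is documented as building on the same error-free partial-sum tracking and returns the correctly rounded sum of a sequence:

```python
import math

# An ill-conditioned sum: massive cancellation leaves only the 1.0.
terms = [1e16, 1.0, -1e16]

print(sum(terms))        # 0.0 -- the 1.0 is rounded away before cancellation
print(math.fsum(terms))  # 1.0 -- partial-sum errors tracked exactly
```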

How PapersFlow Helps You Research High-Precision Floating-Point Computation

Discover & Search

Research Agent uses searchPapers('high-precision floating-point OR posit arithmetic OR adaptive precision Shewchuk') to find 250+ papers, then citationGraph on Paige and Saunders (1982, 4312 citations) reveals LSQR's influence on 5,000+ solvers, and findSimilarPapers uncovers Gustafson and Yonemoto (2017) posits.

Analyze & Verify

Analysis Agent applies readPaperContent on Shewchuk (1997) to extract adaptive precision algorithms, verifyResponse with CoVe cross-checks error bounds against Walther (1971), and runPythonAnalysis simulates posit vs IEEE float summation errors using NumPy with GRADE scoring precision guarantees.

Synthesize & Write

Synthesis Agent detects gaps in GPU-accelerated multiple-precision via contradiction flagging between Dongarra et al. (1990) BLAS and modern needs, then Writing Agent uses latexEditText for equations, latexSyncCitations for 20+ refs, and latexCompile to produce a review paper with exportMermaid diagrams of error propagation.

Use Cases

"Compare rounding errors in long summation using double vs posit arithmetic"

Research Agent → searchPapers('posit arithmetic Gustafson') → Analysis Agent → runPythonAnalysis(NumPy simulation of 1e6-term summation with posits vs floats) → output: plot and stats showing 100x error reduction
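Absent a posit library (e.g. SoftPosit bindings, which this sketch does not assume installed), the same style of experiment can be run across IEEE precisions; comparing float32 against float64 accumulation illustrates how long-summation error scales with working precision:

```python
import numpy as np

n = 10**6
vals64 = np.full(n, 0.1, dtype=np.float64)
vals32 = vals64.astype(np.float32)

exact = 100000.0  # the true sum of one million 0.1s
err64 = abs(vals64.sum() - exact)
err32 = abs(float(vals32.sum(dtype=np.float32)) - exact)

# The lower-precision accumulation loses far more of the true sum.
print(f"float64 error: {err64:.3e}")
print(f"float32 error: {err32:.3e}")
```

Note that NumPy's sum already uses pairwise summation, so both errors are smaller than a naive left-to-right loop would produce; the precision gap remains visible regardless.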

"Draft LaTeX section on adaptive precision geometric predicates"

Research Agent → readPaperContent('Shewchuk 1997') → Synthesis Agent → gap detection → Writing Agent → latexEditText + latexSyncCitations(Shewchuk/Walther) + latexCompile → output: formatted section with verified equations

"Find GitHub repos implementing LSQR high-precision variants"

Research Agent → citationGraph('Paige Saunders 1982') → Code Discovery workflow (paperExtractUrls → paperFindGithubRepo → githubRepoInspect) → output: 15 repos with precision-extended LSQR forks and performance benchmarks

Automated Workflows

Deep Research workflow scans 50+ papers on 'multiple-precision arithmetic', chains citationGraph → findSimilarPapers → structured report ranking by citations (Paige/Saunders top). DeepScan's 7-step analysis verifies Shewchuk (1997) predicates with runPythonAnalysis checkpoints and CoVe. Theorizer generates hypotheses on posit integration with BLAS from Dongarra et al. (1990).

Frequently Asked Questions

What defines high-precision floating-point computation?

Computations that carry more precision than the 53-bit significand of IEEE 754 double, implemented via libraries such as MPFR or algorithms such as adaptive-precision arithmetic (Shewchuk, 1997).

What are key methods?

Adaptive precision predicates (Shewchuk, 1997), posit numbers (Gustafson and Yonemoto, 2017), unified elementary functions (Walther, 1971), and compressed arrays (Lindström, 2014).

What are foundational papers?

Paige and Saunders (1982) LSQR (4,312 citations) for sparse solvers; Dongarra et al. (1990) Level 3 BLAS (1,814 citations); and Walther (1971) unified elementary functions (1,018 citations).

What open problems exist?

GPU acceleration for arbitrary precision; error-free transformations at scale; seamless integration of posits with legacy BLAS (Dongarra et al., 1990).
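The error-free transformations mentioned above rest on one small building block: TwoSum (due to Knuth and Møller) splits a floating-point addition into its rounded result and its exact rounding error, using only ordinary double-precision operations. A minimal sketch:

```python
def two_sum(a, b):
    """Error-free transformation of addition: returns (s, e) with
    s = fl(a + b) and s + e == a + b exactly (in exact arithmetic)."""
    s = a + b
    b_virtual = s - a          # the part of b that made it into s
    a_virtual = s - b_virtual  # the part of a that made it into s
    e = (a - a_virtual) + (b - b_virtual)  # what rounding discarded
    return s, e

# The 1.0 is lost in the rounded sum but captured exactly in e.
s, e = two_sum(1e16, 1.0)
print(s, e)  # 1e+16 1.0
```

Chaining this transformation over a whole vector is exactly how Shewchuk-style exact summation scales, which is why making it fast at scale (and on GPUs) remains an open engineering problem.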

Research Numerical Methods and Algorithms with AI

PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching High-Precision Floating-Point Computation with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers