Re: [dev] Re: json

From: Markus Wichmann <nullplan_AT_gmx.net>
Date: Sun, 16 Jun 2019 11:04:39 +0200

On Sat, Jun 15, 2019 at 09:28:05PM +0200, Mattias Andrée wrote:
> `long double` is able to exactly represent all values exactly
> representable in `uint64_t`, `int64_t` and `double` (big float
> can be used for other languages).

Not guaranteed. What exactly a long double is differs from one
architecture to the next. On the PC, you get an 80-bit IEEE 754 extended
precision number (the minimum necessary to qualify for that title is 79
bits; the Intel engineers threw in a completely useless "explicit
integer" bit to make it a round number).

On AArch64, you get an IEEE 754 quad precision number (128 bits).
Unfortunately, in the absence of hardware support for it, any use of the
type pulls sizable soft-float routines into the runtime. And as yet, I
am not aware of any hardware implementation of that type.

On PowerPC 64, you can get a "double-double": a pair of IEEE 754 double
precision numbers whose sum is the actual value being represented. There
is no hardware support for that format directly, but since there *is*
hardware support for double, the library needed is a rather thin
wrapper.
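
To illustrate the pair-of-doubles idea (this is just the principle, not
the actual IBM format or its runtime helpers): any 64-bit integer can be
carried exactly as the sum of two doubles, even though a single double
cannot hold it.

#include <stdint.h>
#include <stdio.h>

/* Split a 64-bit integer into two doubles whose exact sum is x. Each
 * half carries at most 32 significant bits, so both conversions are
 * exact. */
static void split_u64(uint64_t x, double *hi, double *lo)
{
	*hi = (double)(x >> 32) * 4294967296.0; /* upper 32 bits */
	*lo = (double)(uint32_t)x;              /* lower 32 bits */
}

int main(void)
{
	uint64_t x = 0xDEADBEEFCAFEBABEull;
	double hi, lo;

	split_u64(x, &hi, &lo);
	printf("original         : %llx\n", (unsigned long long)x);
	printf("via one double   : %llx\n", (unsigned long long)(double)x);
	printf("via a double pair: %llx\n",
	       (unsigned long long)hi + (unsigned long long)lo);
	return 0;
}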

I said you *can* get a double-double on PPC, because you can also use a
compiler flag to make long double equal to double. Musl for instance
requires this, as double-double is not a supported long double
representation.

On almost all other platforms (arm32 for certain; the others I am not
sure about) you get IEEE 754 double precision, which does not have a
large enough significand (53 bits) to represent all 64-bit integer
values.
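
That is easy to demonstrate: anything needing more than 53 significand
bits gets rounded, so e.g. 2^63 + 1 does not survive a round trip
through double.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t x = ((uint64_t)1 << 63) + 1; /* needs 64 significand bits */
	double d = (double)x;                 /* rounds to 2^63 */

	printf("x           = %llu\n", (unsigned long long)x);
	printf("(uint64_t)d = %llu\n", (unsigned long long)d);
	printf("round trip %s\n", (uint64_t)d == x ? "ok" : "failed");
	return 0;
}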

> In most cases, the
> program know what size and precision is required.
>

That leaves one problem: Big Number support. Should the JSON library
require that a big number library be present, even for applications
that don't need or want it? If not, how do you avoid that? With dynamic
linking that is impossible, to my knowledge. With static linking you
could get around it, since the linker simply drops archive members that
are never referenced.
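
One hypothetical way to dodge the question entirely is to give the
parser no bignum support of its own: classify what fits the native
types and hand anything bigger to the application as the raw token. A
rough sketch, all names invented for illustration:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct json_number_cb {
	void (*on_i64)(void *ctx, int64_t v);
	void (*on_double)(void *ctx, double v);
	/* raw token, untouched: feed it to whatever bignum library the
	 * application already uses, or ignore it */
	void (*on_big)(void *ctx, const char *tok, size_t len);
};

static void emit_number(const char *tok, size_t len,
                        const struct json_number_cb *cb, void *ctx)
{
	char buf[64], *end;

	if (len >= sizeof buf) {
		cb->on_big(ctx, tok, len);  /* longer than any native type */
		return;
	}
	memcpy(buf, tok, len);
	buf[len] = '\0';

	if (!strpbrk(buf, ".eE")) {         /* integer-looking token */
		errno = 0;
		long long i = strtoll(buf, &end, 10);
		if (*end == '\0' && errno == 0) {
			cb->on_i64(ctx, (int64_t)i);
			return;
		}
		cb->on_big(ctx, tok, len);  /* overflows int64_t */
		return;
	}

	errno = 0;
	double d = strtod(buf, &end);
	if (*end == '\0' && errno == 0) {
		cb->on_double(ctx, d);
		return;
	}
	cb->on_big(ctx, tok, len);          /* out of double range */
}

static void print_i64(void *ctx, int64_t v) { (void)ctx; printf("i64:    %lld\n", (long long)v); }
static void print_dbl(void *ctx, double v)  { (void)ctx; printf("double: %g\n", v); }
static void print_big(void *ctx, const char *t, size_t n) { (void)ctx; printf("bignum: %.*s\n", (int)n, t); }

int main(void)
{
	struct json_number_cb cb = { print_i64, print_dbl, print_big };
	const char *toks[] = { "42", "2.5", "123456789012345678901234567890" };
	size_t i;

	for (i = 0; i < 3; i++)
		emit_number(toks[i], strlen(toks[i]), &cb, NULL);
	return 0;
}

Of course that only pushes the problem of choosing a bignum library out
to the application.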

But there is another, more organizational problem: if you have a JSON
parser with bignum support from another library, most applications
don't just want to load such a number, they want to actually use it,
and thus the JSON library suddenly dictates which bignum library the
application has to use. Worse, if an application ties a JSON library
and a cryptographic library together, then both of these have to agree
on the bignum library in order to be compatible, or else the
application has to link against *two* bignum libraries (yay for
duplicate code) and translate between them. Because that's exactly how
I want my CPU cycles spent.

Leaky abstractions are a b****.

Ciao,
Markus