Know how Perl handles scientific notation in string to number conversions.

A recent question on Stack Overflow asked about the difference between the same floating-point number written in plain decimal notation and in scientific notation. Why does 0.76178 come out differently than 7.6178E-01? When Perl stores them, they can come out as slightly different numbers. This is related to the perlfaq answer to “Why am I getting long decimals (eg, 19.9499999999999) instead of the numbers I should be getting (eg, 19.95)?”, but a bit more involved. You’ll see how to skip the whole mess at the end, but be patient.

Although the perl source may seem like a scary place to go, if you know just a little C (and why don’t you?) it’s not that bad, especially for simple things like number conversions. I can tell you all sorts of stuff in Learning Perl or in the documentation, but the real answers are in the code. If you want a quick tutorial in perl innards, see Perlguts Illustrated, although you won’t need it much for this item. You might also want to read Extending and Embedding Perl.

I hadn’t previously investigated this issue, so I started as fresh as anyone else. Just what is perl doing? You’ll go on a little code walk to find out.

First, get the Perl source. You can download it from CPAN or clone its git repository as shown in perlrepository.

Once you unpack the source, you should see a numeric.c in the top-level directory. That seems like the likely candidate for things that deal with numbers. Scanning through that file (well, actually just looking at the subroutine list), you see a lot of names with atof in them. Those might click as “ASCII to *” sorts of routines, in this case, ASCII to float. Indeed, you find two subroutines:

  • Perl_my_atof2
  • Perl_my_atof

Now you have your first hook into the source code. Who uses these and what do they pass as arguments? Grep the repository:

$ grep Perl_my_atof *
embed.h:#define my_atof                 Perl_my_atof
embed.h:#define my_atof2                Perl_my_atof2
embed.h:#define my_atof(a)              Perl_my_atof(aTHX_ a)
embed.h:#define my_atof2(a,b)           Perl_my_atof2(aTHX_ a,b)
global.sym:Perl_my_atof
global.sym:Perl_my_atof2
numeric.c:Perl_my_atof(pTHX_ const char* s)
numeric.c:Perl_my_atof2(pTHX_ const char* orig, NV* value)
perl.h:#   define Perl_atof(s) Perl_my_atof(s)
perl.h:#   define Perl_atof2(s, n) Perl_my_atof2(aTHX_ (s), &(n))
proto.h:PERL_CALLCONV NV        Perl_my_atof(pTHX_ const char *s)
proto.h:PERL_CALLCONV char*     Perl_my_atof2(pTHX_ const char *s, NV* value)

There’s nothing exciting there, really, but notice the lines from perl.h. Look in there to see how the source figures out who gets to implement atof. If USE_PERL_ATOF is defined (and it should be, unless the person who compiled your perl turned it off with -DUSE_PERL_ATOF=0), then the Perl_atof* macros point to the internal subroutines. Otherwise, the source relies on whatever your libc provides (and that can be part of the problem if you have a broken libc):

#ifndef USE_PERL_ATOF
#   ifndef _UNICOS
#       define USE_PERL_ATOF
#   endif
#else
#   if USE_PERL_ATOF == 0
#       undef USE_PERL_ATOF
#   endif
#endif

#ifdef USE_PERL_ATOF
#   define Perl_atof(s) Perl_my_atof(s)
#   define Perl_atof2(s, n) Perl_my_atof2(aTHX_ (s), &(n))
#else
#   define Perl_atof(s) (NV)atof(s)
#   define Perl_atof2(s, n) ((n) = atof(s))
#endif

Forget about the libc version and look at Perl’s internal versions. After that entry in perl.h, embed.h adds another level of indirection that lets the rest of the source leave off the Perl_ prefix, so if you look for my_atof:

$ grep my_atof *
embed.fnc:Ap    |NV     |my_atof        |NN const char *s
embed.fnc:Ap    |char*  |my_atof2       |NN const char *s|NN NV* value
embed.h:#define my_atof                 Perl_my_atof
embed.h:#define my_atof2                Perl_my_atof2
embed.h:#define my_atof(a)              Perl_my_atof(aTHX_ a)
embed.h:#define my_atof2(a,b)           Perl_my_atof2(aTHX_ a,b)
global.sym:Perl_my_atof
global.sym:Perl_my_atof2
numeric.c:Perl_my_atof(pTHX_ const char* s)
numeric.c:Perl_my_atof2(pTHX_ const char* orig, NV* value)
perl.h:#   define Perl_atof(s) Perl_my_atof(s)
perl.h:#   define Perl_atof2(s, n) Perl_my_atof2(aTHX_ (s), &(n))
perl.h:#define Atof                             my_atof
perl.h:#define Atof                             my_atof
proto.h:PERL_CALLCONV NV        Perl_my_atof(pTHX_ const char *s)
proto.h:PERL_CALLCONV char*     Perl_my_atof2(pTHX_ const char *s, NV* value)

Now you’ve teased out that perl.h has another level of indirection by calling it Atof. That’s how you see it in the rest of the code, mostly in sv.c, which is the code that tells scalar values how to be scalars:

$ grep Atof *
perl.h:#define Atof                             my_atof
perl.h:#define Atof                             my_atof
sv.c:       SvNV_set(sv, Atof(SvPVX_const(sv)));
sv.c:                   Atof.  */
sv.c:                                   slot set, public IOK, Atof() unneeded.
sv.c:       return I_V(Atof(SvPVX_const(sv)));
sv.c:       return U_V(Atof(SvPVX_const(sv)));
sv.c:       return Atof(SvPVX_const(sv));
sv.c:       SvNV_set(sv, Atof(SvPVX_const(sv)));
sv.c:   SvNV_set(sv, Atof(SvPVX_const(sv)));
sv.c:   sv_setnv(sv,Atof(SvPVX_const(sv)) + 1.0);
sv.c:    sv_setnv(sv,Atof(SvPVX_const(sv)) - 1.0);      /* punt */
toke.c:     nv = Atof(PL_tokenbuf);

So, when sv.c calls Atof, it’s really calling my_atof, which is really Perl_my_atof. Look at that subroutine, which is fairly straightforward C, but ignore the various types, which are Perl macros that stand in for the real thing. For instance, NV is whatever represents a number value (which is a type of scalar).

Perl_my_atof has two jobs. If a numeric locale is in effect, it scans the string twice, once using the locale and once without, and keeps the larger result in absolute value; otherwise it calls Perl_atof2 (which is really Perl_my_atof2) right away. The s is the string it needs to convert, and x and y hold the numbers that result:

NV
Perl_my_atof(pTHX_ const char* s)
{
    NV x = 0.0;
#ifdef USE_LOCALE_NUMERIC
    dVAR;

    PERL_ARGS_ASSERT_MY_ATOF;

    if (PL_numeric_local && IN_LOCALE) {
	NV y;

	/* Scan the number twice; once using locale and once without;
	 * choose the larger result (in absolute value). */
	Perl_atof2(s, x);
	SET_NUMERIC_STANDARD();
	Perl_atof2(s, y);
	SET_NUMERIC_LOCAL();
	if ((y < 0.0 && y < x) || (y > 0.0 && y > x))
	    return y;
    }
    else
	Perl_atof2(s, x);
#else
    Perl_atof2(s, x);
#endif
    return x;
}

So now you need to look at Perl_my_atof2 to see what it’s doing. That is where all of the meat is. Go through it in chunks, digesting a little bit at a time. First, there is an initialization chunk:

char*
Perl_my_atof2(pTHX_ const char* orig, NV* value)
{
    NV result[3] = {0.0, 0.0, 0.0};
    const char* s = orig;
#ifdef USE_PERL_ATOF
    UV accumulator[2] = {0,0};	/* before/after dp */
    bool negative = 0;
    const char* send = s + strlen(orig) - 1;
    bool seen_digit = 0;
    I32 exp_adjust[2] = {0,0};
    I32 exp_acc[2] = {-1, -1};
    /* the current exponent adjust for the accumulators */
    I32 exponent = 0;
    I32	seen_dp  = 0;
    I32 digit = 0;
    I32 old_digit = 0;
    I32 sig_digits = 0; /* noof significant digits seen so far */

    PERL_ARGS_ASSERT_MY_ATOF2;

The next chunk sets up some limits so the code doesn’t process more digits than the NV can hold:

/* There is no point in processing more significant digits
 * than the NV can hold. Note that NV_DIG is a lower-bound value,
 * while we need an upper-bound value. We add 2 to account for this;
 * since it will have been conservative on both the first and last digit.
 * For example a 32-bit mantissa with an exponent of 4 would have
 * exact values in the set
 *               4
 *               8
 *              ..
 *     17179869172
 *     17179869176
 *     17179869180
 *
 * where for the purposes of calculating NV_DIG we would have to discount
 * both the first and last digit, since neither can hold all values from
 * 0..9; but for calculating the value we must examine those two digits.
 */
#define MAX_SIG_DIGITS (NV_DIG+2)

/* the max number we can accumulate in a UV, and still safely do 10*N+9 */
#define MAX_ACCUMULATE ( (UV) ((UV_MAX - 9)/10))

Now the code starts scanning the string. As long as the next character is whitespace, it advances the string pointer by one (yep, " 1234" still looks like a number to Perl):

    /* leading whitespace */
    while (isSPACE(*s))
	++s;

After the whitespace, there might be a sign character, either - or +. If perl finds the - it remembers that it saw it by setting negative; in either case it advances the string pointer past the sign. It doesn’t need to remember that it saw the +, and you’ll have to add that to your stringified version when you format it on the way out:

    /* sign */
    switch (*s) {
	case '-':
	    negative = 1;
	    /* fall through */
	case '+':
	    ++s;
    }

The next part checks whether strtod is available. That’s the modern C equivalent of atof, and it can handle the values NaN (not a number) and Inf (infinity). If the string starts with an N, n, I, or i, perl passes it to strtod and, if that works out, returns the result:

    /* punt to strtod for NaN/Inf; if no support for it there, tough luck */

#ifdef HAS_STRTOD
    if (*s == 'n' || *s == 'N' || *s == 'i' || *s == 'I') {
        const char *p = negative ? s - 1 : s;
        char *endp;
        NV rslt;
        rslt = strtod(p, &endp);
        if (endp != p) {
            *value = rslt;
            return (char *)endp;
        }
    }
#endif
    /* we accumulate digits into an integer; when this becomes too
     * large, we add the total to NV and start again */

    while (1) {

Now the real fun begins. As long as the next character is a digit, the code keeps track of it and its order of magnitude through a lot of bookkeeping, which you can skip. Basically, it’s consuming digits until it runs into non-digit things such as a decimal point or the [eE] of exponential notation:

	if (isDIGIT(*s)) {
	...
	}

Now, if the next thing is not a digit, the code follows the other branch. If the code hasn’t yet seen the decimal point (seen_dp) and GROK_NUMERIC_RADIX, which is really Perl_grok_numeric_radix, says the next character is a decimal separator, perhaps based on the current locale, then it sets seen_dp (there can only be one).

	else if (!seen_dp && GROK_NUMERIC_RADIX(&s, send)) {
	    seen_dp = 1;

If the number has more digits than the local architecture can handle, the code just eats the rest (so there’s one way to lose precision):

	    if (sig_digits > MAX_SIG_DIGITS) {
		do {
		    ++s;
		} while (isDIGIT(*s));
		break;
	    }
	}

Or, if the next thing was neither a digit nor the first decimal separator, the processing for this section is done. That’s how Perl handles numbers like 1234fred. When it gets to the f, it just stops, and you still get 1234.

	else {
	    break;
	}
    }

Now all that bookkeeping has to put the number together correctly:

    result[0] = S_mulexp10(result[0], exp_acc[0]) + (NV)accumulator[0];
    if (seen_dp) {
	result[1] = S_mulexp10(result[1], exp_acc[1]) + (NV)accumulator[1];
    }

The code isn’t done yet. It’s processed all the digits and the possible decimal separator. The string might still have more characters. If the number is in scientific notation, the next character will be either e or E, so it checks for that:

    if (seen_digit && (*s == 'e' || *s == 'E')) {

There are three possibilities for a valid character after the E: a -, a +, or a digit. A switch checks for the sign:

	bool expnegative = 0;
	++s;
	switch (*s) {
	    case '-':
		expnegative = 1;
		/* fall through */
	    case '+':
		++s;
	}

After figuring out the sign, it checks for digits, building up the exponent in the single statement in the while loop: for each new digit it finds, it multiplies what it already has by 10, then adds the new digit. Once it has the whole number, it adjusts the sign:

	while (isDIGIT(*s))
	    exponent = exponent * 10 + (*s++ - '0');
	if (expnegative)
	    exponent = -exponent;
    }

Once the code knows the exponent, it has to adjust the result so it can store the proper number. The exponential notation, or any other format, means nothing to computers and all of the numbers end up as bits:

    /* now apply the exponent */

    if (seen_dp) {
	result[2] = S_mulexp10(result[0],exponent+exp_adjust[0])
		+ S_mulexp10(result[1],exponent-exp_adjust[1]);
    } else {
	result[2] = S_mulexp10(result[0],exponent+exp_adjust[0]);
    }

The next interesting bit is in S_mulexp10, which takes the number and its exponent and applies one to the other. If the exponent is 0, it returns the value (10**0 is 1). If the value is 0, it returns 0.

Then there’s a lot of special case stuff which you can skip without missing out on the basic idea. The next part figures out how to apply the exponent to the digits perl found in the coefficient.

If the exponent is less than 0, it remembers the sign in negative and makes the exponent positive. At the end of the subroutine, perl either divides or multiplies the coefficient based on the sign of the exponent. This is another place where a number can lose precision. Notice that result and power are floating-point numbers. You can lose some precision merely by storing them as floats (although not in power, since its values are really integers), but you can also lose some by multiplying floats.

In the for loop, it multiplies the result by the appropriate power for each set bit in exponent. The power value is essentially a constant that defines the base for the exponential notation. If you wanted the exponent to be powers of 37 instead of ten, that’s what you’d change.

This is actually some cool code: binary exponentiation. For every bit set in the exponent, it multiplies the result by the power for that bit, and on each step through the loop, power squares itself, doubling its number of zeros:

STATIC NV
S_mulexp10(NV value, I32 exponent)
{
    NV result = 1.0;
    NV power = 10.0;
    bool negative = 0;
    I32 bit;

    if (exponent == 0)
	return value;
    if (value == 0)
	return (NV)0;

#if ((defined(VMS) && !defined(__IEEE_FP)) || defined(_UNICOS)) && defined(NV_MAX_10_EXP)
...
#endif

    if (exponent < 0) {
	negative = 1;
	exponent = -exponent;
    }
    for (bit = 1; exponent; bit <<= 1) {
	if (exponent & bit) {
	    exponent ^= bit;
	    result *= power;
	    /* Floating point exceptions are supposed to be turned off,
	     *  but if we're obviously done, don't risk another iteration.  
	     */
	     if (exponent == 0) break;
	}
	power *= power;
    }
    return negative ? value / result : value * result;
}

And there you have your answer to why a number like 0.76178 might come out differently than writing it as 7.6178E-01. The numbers are processed by different code paths, and there are a lot more floating-point operations in the exponential-notation case.

Although you only went through numeric.c, the same thing happens in the part of the source that tokenizes your program. You’ll find that toke.c’s Perl_scan_num also ultimately calls Atof once it collects the characters that it thinks belong to the number.

Now that you understand why it happens, how do you fix it? It’s like the man who goes to the doctor saying “My arm hurts when I move it like this!”, to which the doctor replies “Then don’t move it like that!”. If you want to avoid the floating-point problems, don’t use the floating-point number processing. You can use bignum instead, as we show in Item 14: Handle big numbers with bignum.