when upscaling, even the very last digit is needed in cases where the
input is exact; no digits can be discarded. but when downscaling, any
digits less significant than the mantissa bits are destined for the
great bitbucket; the only influence they can have is their presence
(being nonzero), which can only matter for the final rounding. thus,
we simply throw them away early. the result is
nearly a 4x performance improvement for processing huge values.
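
to make this concrete, here is a small standalone sketch (not the musl
code: shr_big, the digit counts, and the demo values are invented for
illustration, and musl's ring buffer and leading-zero rebasing are
omitted). carries flow only toward less significant digits, so cutting
the loop off a safe distance past the mantissa leaves every digit above
the cutoff bit-identical to a full pass:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* divide a big number x (n base-10^9 digits, most significant first)
   by 2^sh, touching only the first lim digits; bits shifted out past
   the last processed digit are dropped. sh<=9 keeps the carry exact,
   since 10^9 = 2^9 * 5^9 is divisible by 2^sh. */
static void shr_big(uint32_t *x, int n, int sh, int lim)
{
	uint32_t carry = 0;
	for (int k=0; k<n && k<lim; k++) {
		uint32_t tmp = x[k] & (1u<<sh)-1;
		x[k] = (x[k]>>sh) + carry;
		carry = (1000000000u>>sh) * tmp;
	}
}

int main(void)
{
	uint32_t full[12], cut[12];
	for (int k=0; k<12; k++) full[k] = 123456789u + 7u*k;
	memcpy(cut, full, sizeof full);

	shr_big(full, 12, 9, 12); /* process every digit */
	shr_big(cut, 12, 9, 6);   /* stop well past the mantissa */

	for (int k=0; k<6; k++)
		printf("%09u %09u %s\n", (unsigned)full[k],
		       (unsigned)cut[k], full[k]==cut[k] ? "same" : "DIFFER");
	return 0;
}

all six pairs print as "same": the truncation can never reach the
digits that feed the mantissa, only the sticky tail below them.
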
the particular threshold LD_B1B_DIG+3 is not chosen sharply; it's
simply a "safe" distance past the significant bits. it would be nice
to replace it with a sharp bound, but i suspect performance will be
comparable (within a few percent) anyway.
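
for scale, assuming musl's definitions (LD_B1B_DIG is the count of
base-10^9 digits needed to hold the maximal mantissa value, e.g. 3 for
the 64-bit ld80 mantissa), the loop below keeps LD_B1B_DIG+3 digits,
and each base-10^9 digit spans about 30 bits. a back-of-the-envelope
check of the margin:

#include <stdio.h>
#include <math.h>

int main(void)
{
	int mant_bits = 64; /* assumed: LDBL_MANT_DIG for ld80 */
	int b1b_dig = 3;    /* assumed: musl's LD_B1B_DIG for ld80 */
	int kept = b1b_dig + 3;            /* digits the loop processes */
	double bits = 9 * kept * log2(10); /* bits those digits span */
	printf("%d digits ~ %.0f bits vs %d mantissa bits\n",
	       kept, bits, mant_bits);
	return 0;
}

this prints "6 digits ~ 179 bits vs 64 mantissa bits", i.e. the cutoff
sits over a hundred bits past anything that can affect the result or
its rounding, so a sharp bound could shave off at most a couple of
digits per pass. the change itself:
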
 		/* FIXME: find a way to compute optimal sh */
 		if (rp > 9+9*LD_B1B_DIG) sh = 9;
 		e2 += sh;
-		for (k=a; k!=z; k=(k+1 & MASK)) {
+		for (i=0; (k=(a+i & MASK))!=z && i<LD_B1B_DIG+3; i++) {
 			uint32_t tmp = x[k] & (1<<sh)-1;
 			x[k] = (x[k]>>sh) + carry;
 			carry = (1000000000>>sh) * tmp;
 			if (k==a && !x[k]) {
 				a = (a+1 & MASK);
+				i--;
 				rp -= 9;
 			}
 		}
-		if (carry) {
+		if (carry && k==z) {
 			if ((z+1 & MASK) != a) {
 				x[z] = carry;
 				z = (z+1 & MASK);