large precision values could cause out-of-bounds pointer arithmetic in
computing the precision cutoff (used to avoid expensive long-precision
arithmetic when the result will be discarded). per the C standard,
this is undefined behavior. one would expect that it works anyway, and
in fact it did in most real-world cases, but it was randomly
(depending on aslr) crashing in i386 binaries running on x86_64
kernels. this is because linux puts the userspace stack near 4GB
(instead of near 3GB) when the kernel is 64-bit, leading to the
out-of-bounds pointer arithmetic overflowing past the end of address
space and giving a very low pointer value, which then compared lower
than a pointer it should have compared higher than.
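
to illustrate (this sketch is not part of the patch, and the address and
precision in it are made up), redoing the old cutoff arithmetic on plain
32-bit integers shows how a buffer near 4GB plus a huge precision-derived
offset wraps around and ends up numerically below the buffer it was
supposed to bound:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t a = 0xfffff000; /* hypothetical digit-buffer address near 4GB */
        uint32_t p = 2000000000; /* hypothetical huge field precision */
        /* the old cutoff: 2 + p/9 uint32_t slots (4 bytes each) past the
         * buffer start, computed mod 2^32 to mimic 32-bit pointers */
        uint32_t z2 = a + 4*(2 + p/9);
        printf("a  = %#" PRIx32 "\n", a);
        printf("z2 = %#" PRIx32 " (z2 < a: the cutoff wrapped around)\n", z2);
        return 0;
    }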
the new code rearranges the arithmetic so that no overflow can occur.
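schematically (identifiers as in the hunk below; not standalone code):

    /* old: forms a pointer that can lie far outside the digit buffer,
     * which is undefined and can wrap around on 32-bit targets */
    z2 = ((t|32)=='f' ? r : a) + 2 + p/9;
    z = MIN(z, z2);

    /* new: z and b both point into the buffer, so z-b is a small,
     * well-defined difference; b+2+p/9 is only formed once it is known
     * to lie below z, i.e. still inside the buffer */
    b = (t|32)=='f' ? r : a;
    if (z-b > 2+p/9) z = b+2+p/9;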
while this bug could crash printf with memory corruption, it's
unlikely to have security impact in real-world applications, since the
ability to provide an extremely large field precision value under
attacker control is required to trigger the bug.
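
purely as an illustration of the trigger condition (the precision value
here is made up, and whether such a call actually crashed depended on
where the stack was mapped):

    #include <stdio.h>

    int main(void)
    {
        char buf[32];
        /* extremely large field precision; with the old code, the precision
         * cutoff pointer could be pushed past the end of the address space */
        snprintf(buf, sizeof buf, "%.2000000000f", 1.0);
        return 0;
    }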
         e2-=sh;
     }
     while (e2<0) {
-        uint32_t carry=0, *z2;
+        uint32_t carry=0, *b;
         int sh=MIN(9,-e2);
         for (d=a; d<z; d++) {
             uint32_t rm = *d & (1<<sh)-1;
             *d = (*d>>sh) + carry;
             carry = (1000000000>>sh) * rm;
         }
         if (!*a) a++;
         if (carry) *z++ = carry;
         /* Avoid (slow!) computation past requested precision */
-        z2 = ((t|32)=='f' ? r : a) + 2 + p/9;
-        z = MIN(z, z2);
+        b = (t|32)=='f' ? r : a;
+        if (z-b > 2+p/9) z = b+2+p/9;
         e2+=sh;
     }