From: Rich Felker
Date: Sun, 21 Dec 2014 07:30:29 +0000 (-0500)
Subject: fix signedness of UINT32_MAX and UINT64_MAX at the preprocessor level
X-Git-Tag: v1.1.6~9
X-Git-Url: https://git.librecmc.org/?a=commitdiff_plain;h=dac4fc49ae3ccb40bae4ef00fb2d93027f4ee9e1;p=oweals%2Fmusl.git

fix signedness of UINT32_MAX and UINT64_MAX at the preprocessor level

per the rules for hexadecimal integer constants, the previous
definitions were correctly treated as having unsigned type except
possibly when used in preprocessor conditionals, where all arithmetic
takes place as intmax_t or uintmax_t. the explicit 'u' suffix ensures
that they are treated as unsigned in all contexts.
---

diff --git a/include/stdint.h b/include/stdint.h
index 518d05b9..a2968197 100644
--- a/include/stdint.h
+++ b/include/stdint.h
@@ -47,8 +47,8 @@ typedef uint64_t uint_least64_t;
 
 #define UINT8_MAX  (0xff)
 #define UINT16_MAX (0xffff)
-#define UINT32_MAX (0xffffffff)
-#define UINT64_MAX (0xffffffffffffffff)
+#define UINT32_MAX (0xffffffffu)
+#define UINT64_MAX (0xffffffffffffffffu)
 
 #define INT_FAST8_MIN   INT8_MIN
 #define INT_FAST64_MIN  INT64_MIN
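
[Editor's note: the sketch below is not part of the commit; it is a minimal
illustration of the behavior the message describes. The macro names
OLD_UINT32_MAX and NEW_UINT32_MAX are hypothetical stand-ins for the pre- and
post-patch definitions. In #if arithmetic every integer constant is evaluated
as intmax_t or uintmax_t, so the unsuffixed 0xffffffff fits in a signed
intmax_t and compares against negative values differently than the suffixed
form does.]

/* Hypothetical stand-ins for the old and new definitions (not musl code). */
#include <stdio.h>

#define OLD_UINT32_MAX (0xffffffff)   /* signed intmax_t in #if arithmetic */
#define NEW_UINT32_MAX (0xffffffffu)  /* uintmax_t in #if arithmetic */

/* Unsuffixed: both operands are intmax_t, so the comparison is signed. */
#if OLD_UINT32_MAX > -1
#define OLD_RESULT "old: 0xffffffff > -1 is true (signed comparison)"
#else
#define OLD_RESULT "old: 0xffffffff > -1 is false"
#endif

/* Suffixed: -1 converts to uintmax_t (UINTMAX_MAX), so the test is false. */
#if NEW_UINT32_MAX > -1
#define NEW_RESULT "new: 0xffffffffu > -1 is true"
#else
#define NEW_RESULT "new: 0xffffffffu > -1 is false (-1 converts to UINTMAX_MAX)"
#endif

int main(void)
{
	puts(OLD_RESULT);
	puts(NEW_RESULT);
	return 0;
}

[With the 'u' suffix, the preprocessor comparison agrees with what the same
comparison against an actual uint32_t value yields at run time, which is the
point of requiring the macro to have the promoted type of uint32_t.]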