fix signedness of UINT32_MAX and UINT64_MAX at the preprocessor level
authorRich Felker <dalias@aerifal.cx>
Sun, 21 Dec 2014 07:30:29 +0000 (02:30 -0500)
committerRich Felker <dalias@aerifal.cx>
Sun, 21 Dec 2014 07:30:29 +0000 (02:30 -0500)
per the rules for hexadecimal integer constants, the previous
definitions were correctly treated as having unsigned type except
possibly when used in preprocessor conditionals, where all arithmetic
takes place as intmax_t or uintmax_t. the explicit 'u' suffix ensures
that they are treated as unsigned in all contexts.

include/stdint.h

index 518d05b9d179ee8ecd22067082dbd5a7f952519e..a2968197dbe2312c46de0d41f09675b56846984c 100644
@@ -47,8 +47,8 @@ typedef uint64_t uint_least64_t;
 
 #define UINT8_MAX  (0xff)
 #define UINT16_MAX (0xffff)
-#define UINT32_MAX (0xffffffff)
-#define UINT64_MAX (0xffffffffffffffff)
+#define UINT32_MAX (0xffffffffu)
+#define UINT64_MAX (0xffffffffffffffffu)
 
 #define INT_FAST8_MIN   INT8_MIN
 #define INT_FAST64_MIN  INT64_MIN