1 #if 0 /* Moved to malloc.h */
2 /* ---------- To make a malloc.h, start cutting here ------------ */
5 A version of malloc/free/realloc written by Doug Lea and released to the
6 public domain. Send questions/comments/complaints/performance data
9 * VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
11 Note: There may be an updated version of this malloc obtainable at
12 ftp://g.oswego.edu/pub/misc/malloc.c
13 Check before installing!
15 * Why use this malloc?
17 This is not the fastest, most space-conserving, most portable, or
18 most tunable malloc ever written. However it is among the fastest
19 while also being among the most space-conserving, portable and tunable.
20 Consistent balance across these factors results in a good general-purpose
21 allocator. For a high-level description, see
22 http://g.oswego.edu/dl/html/malloc.html
24 * Synopsis of public routines
26 (Much fuller descriptions are contained in the program documentation below.)
malloc(size_t n);
  Return a pointer to a newly allocated chunk of at least n bytes, or null
  if no space is available.
free(Void_t* p);
  Release the chunk of memory pointed to by p, or no effect if p is null.
33 realloc(Void_t* p, size_t n);
34 Return a pointer to a chunk of size n that contains the same data
35 as does chunk p up to the minimum of (n, p's size) bytes, or null
36 if no space is available. The returned pointer may or may not be
37 the same as p. If p is null, equivalent to malloc. Unless the
38 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
39 size argument of zero (re)allocates a minimum-sized chunk.
40 memalign(size_t alignment, size_t n);
41 Return a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument, which must be a power of two.
valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
46 size of the system (or as near to this as can be figured out from
47 all the includes/defines below.)
pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
50 round up n to nearest pagesize.
51 calloc(size_t unit, size_t quantity);
  Returns a pointer to quantity * unit bytes, with all locations
  set to zero.
cfree(Void_t* p);
  Equivalent to free(p).
56 malloc_trim(size_t pad);
57 Release all but pad bytes of freed top-most memory back
58 to the system. Return 1 if successful, else 0.
59 malloc_usable_size(Void_t* p);
  Report the number of usable allocated bytes associated with allocated
61 chunk p. This may or may not report more bytes than were requested,
62 due to alignment and minimum size constraints.
malloc_stats();
  Prints brief summary statistics.
mallinfo()
  Returns (by copy) a struct containing various summary statistics.
67 mallopt(int parameter_number, int parameter_value)
68 Changes one of the tunable parameters described below. Returns
69 1 if successful in changing the parameter, else 0.
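
      For illustration only (an added sketch, not from the original
      documentation), a typical call sequence using the routines
      above might look like:

        void* p = malloc(100);
        p = realloc(p, 200);
        mallopt(M_TRIM_THRESHOLD, 64 * 1024);
        free(p);

      Here realloc may move the data, and the mallopt call merely
      retunes the trim threshold described later in this file.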
* Vital statistics:

  Alignment: 8-byte
       8-byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.
77 Assumed pointer representation: 4 or 8 bytes
78 Code for 8-byte pointers is untested by me but has worked
79 reliably by Wolfram Gloger, who contributed most of the
80 changes supporting this.
82 Assumed size_t representation: 4 or 8 bytes
83 Note that size_t is allowed to be 4 bytes even if pointers are 8.
85 Minimum overhead per allocated chunk: 4 or 8 bytes
86 Each malloced chunk has a hidden overhead of 4 bytes holding size
87 and status information.
89 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
                        8-byte ptrs:  24/32 bytes (including 4/8 overhead)
  When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
93 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
94 needed; 4 (8) for a trailing size field
95 and 8 (16) bytes for free list pointers. Thus, the minimum
96 allocatable size is 16/24/32 bytes.
98 Even a request for zero bytes (i.e., malloc(0)) returns a
99 pointer to something of the minimum allocatable size.
101 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
102 8-byte size_t: 2^63 - 16 bytes
104 It is assumed that (possibly signed) size_t bit values suffice to
105 represent chunk sizes. `Possibly signed' is due to the fact
106 that `size_t' may be defined on a system as either a signed or
107 an unsigned type. To be conservative, values that would appear
108 as negative numbers are avoided.
  Requests for sizes with a negative sign bit when the request
  size is treated as a long will return null.
112 Maximum overhead wastage per allocated chunk: normally 15 bytes
  Alignment demands, plus the minimum allocatable size restriction
  make the normal worst-case wastage 15 bytes (i.e., up to 15
  more bytes will be allocated than were requested in malloc), with
  two exceptions:
118 1. Because requests for zero bytes allocate non-zero space,
119 the worst case wastage for a request of zero bytes is 24 bytes.
120 2. For requests >= mmap_threshold that are serviced via
121 mmap(), the worst case wastage is 8 bytes plus the remainder
122 from a system page (the minimal mmap unit); typically 4096 bytes.
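
      As a concrete example (an added illustration, assuming 4-byte
      size_t, hence 8-byte alignment and a 16-byte minimum chunk): a
      request for 13 bytes becomes a 24-byte chunk (13 + 4 bytes of
      overhead, rounded up to a multiple of 8), of which 20 bytes are
      usable, so 7 bytes are wasted; and malloc(0) yields a
      minimum-sized 16-byte chunk.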
126 Here are some features that are NOT currently supported
128 * No user-definable hooks for callbacks and the like.
129 * No automated mechanism for fully checking that all accesses
130 to malloced memory stay within their bounds.
131 * No support for compaction.
133 * Synopsis of compile-time options:
135 People have reported using previous versions of this malloc on all
136 versions of Unix, sometimes by tweaking some of the defines
137 below. It has been tested most extensively on Solaris and
138 Linux. It is also reported to work on WIN32 platforms.
139 People have also reported adapting this malloc for use in
140 stand-alone embedded systems.
142 The implementation is in straight, hand-tuned ANSI C. Among other
143 consequences, it uses a lot of macros. Because of this, to be at
144 all usable, this code should be compiled using an optimizing compiler
  (for example gcc -O2) that can simplify expressions and control paths.
148 __STD_C (default: derived from C compiler defines)
149 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
150 a C compiler sufficiently close to ANSI to get away with it.
151 DEBUG (default: NOT defined)
152 Define to enable debugging. Adds fairly extensive assertion-based
  checking to help track down memory errors, but noticeably slows down
  execution.
155 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
156 Define this if you think that realloc(p, 0) should be equivalent
157 to free(p). Otherwise, since malloc returns a unique pointer for
158 malloc(0), so does realloc(p, 0).
159 HAVE_MEMCPY (default: defined)
160 Define if you are not otherwise using ANSI STD C, but still
161 have memcpy and memset in your C library and want to use them.
162 Otherwise, simple internal versions are supplied.
163 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
164 Define as 1 if you want the C library versions of memset and
165 memcpy called in realloc and calloc (otherwise macro versions are used).
166 At least on some platforms, the simple macro versions usually
167 outperform libc versions.
168 HAVE_MMAP (default: defined as 1)
169 Define to non-zero to optionally make malloc() use mmap() to
170 allocate very large blocks.
171 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
172 Define to non-zero to optionally make realloc() use mremap() to
173 reallocate very large blocks.
174 malloc_getpagesize (default: derived from system #includes)
175 Either a constant or routine call returning the system page size.
176 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
177 Optionally define if you are on a system with a /usr/include/malloc.h
  that declares struct mallinfo. It is not at all necessary to
  define this even if you do have one, but doing so will ensure consistency.
180 INTERNAL_SIZE_T (default: size_t)
181 Define to a 32-bit type (probably `unsigned int') if you are on a
182 64-bit machine, yet do not want or need to allow malloc requests of
  greater than 2^31 to be handled. This saves space, especially for
  very large chunks.
185 INTERNAL_LINUX_C_LIB (default: NOT defined)
186 Defined only when compiled as part of Linux libc.
187 Also note that there is some odd internal name-mangling via defines
188 (for example, internally, `malloc' is named `mALLOc') needed
  when compiling in this case. These look funny but don't otherwise matter.
191 WIN32 (default: undefined)
192 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
193 LACKS_UNISTD_H (default: undefined if not WIN32)
194 Define this if your system does not have a <unistd.h>.
195 LACKS_SYS_PARAM_H (default: undefined if not WIN32)
196 Define this if your system does not have a <sys/param.h>.
197 MORECORE (default: sbrk)
198 The name of the routine to call to obtain more memory from the system.
199 MORECORE_FAILURE (default: -1)
200 The value returned upon failure of MORECORE.
201 MORECORE_CLEARS (default 1)
  True (1) if the routine mapped to MORECORE zeroes out memory (which
  holds for sbrk).
DEFAULT_TRIM_THRESHOLD
DEFAULT_TOP_PAD
DEFAULT_MMAP_THRESHOLD
DEFAULT_MMAP_MAX
  Default values of tunable parameters (described in detail below)
  controlling interaction with host system routines (sbrk, mmap, etc).
  These values may also be changed dynamically via mallopt(). The
  preset defaults are those that give best performance for typical
  programs/systems.
213 USE_DL_PREFIX (default: undefined)
214 Prefix all public routines with the string 'dl'. Useful to
215 quickly avoid procedure declaration conflicts and linker symbol
216 conflicts with existing memory allocation routines.
231 #endif /*__cplusplus*/
236 #if (__STD_C || defined(WIN32))
244 #include <stddef.h> /* for size_t */
246 #include <sys/types.h>
253 #include <stdio.h> /* needed for malloc_stats */
264 Because freed chunks may be overwritten with link fields, this
265 malloc will often die when freed memory is overwritten by user
266 programs. This can be very effective (albeit in an annoying way)
267 in helping track down dangling pointers.
269 If you compile with -DDEBUG, a number of assertion checks are
270 enabled that will catch more memory errors. You probably won't be
271 able to make much sense of the actual assertion errors, but they
272 should help you locate incorrectly overwritten memory. The
273 checking is fairly extensive, and will slow down execution
274 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
275 attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
277 cannot be checked very much automatically.)
279 Setting DEBUG may also be helpful if you are trying to modify
280 this code. The assertions in the check routines spell out in more
281 detail the assumptions and invariants underlying the algorithms.
288 #define assert(x) ((void)0)
293 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
294 of chunk sizes. On a 64-bit machine, you can reduce malloc
295 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
296 at the expense of not being able to handle requests greater than
297 2^31. This limitation is hardly ever a concern; you are encouraged
298 to set this. However, the default version is the same as size_t.
301 #ifndef INTERNAL_SIZE_T
302 #define INTERNAL_SIZE_T size_t
306 REALLOC_ZERO_BYTES_FREES should be set if a call to
307 realloc with zero bytes should be the same as a call to free.
308 Some people think it should. Otherwise, since this malloc
309 returns a unique pointer for malloc(0), so does realloc(p, 0).
313 /* #define REALLOC_ZERO_BYTES_FREES */
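
/*
  For illustration (an added sketch, not part of the allocator): with
  the default setting,

     p = realloc(p, 0);

  returns a pointer to a minimum-sized chunk, just as malloc(0) does.
  With REALLOC_ZERO_BYTES_FREES defined, the same call instead frees p
  and returns null, so p must not be used afterward.
*/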
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
323 #define MORECORE wsbrk
326 #define LACKS_UNISTD_H
327 #define LACKS_SYS_PARAM_H
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.
334 Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
335 Visual C++ header files are included.
337 #define WIN32_LEAN_AND_MEAN
343 HAVE_MEMCPY should be defined if you are not otherwise using
344 ANSI STD C, but still have memcpy and memset in your C library
345 and want to use them in calloc and realloc. Otherwise simple
346 macro versions are defined here.
348 USE_MEMCPY should be defined as 1 if you actually want to
349 have memset and memcpy called. People report that the macro
350 versions are often enough faster than libc versions on many
351 systems that it is better to use them.
365 #if (__STD_C || defined(HAVE_MEMCPY))
368 void* memset(void*, int, size_t);
369 void* memcpy(void*, const void*, size_t);
372 /* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
383 /* The following macros are only invoked with (2n+1)-multiples of
384 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
385 for fast inline execution when n is small. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
422 #else /* !USE_MEMCPY */
424 /* Use Duff's device for good zeroing/copying performance. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)
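
/*
  How the Duff's device above works (an explanatory note, not from the
  original sources): the switch jumps into the middle of the unrolled
  loop, so a word count that is not a multiple of 8 is handled by one
  partial first pass, and mcn counts the remaining full passes of 8.
  For example, copying 19 words sets mcn = (19-1)/8 = 2 and
  mctmp = 19 % 8 = 3: control enters at `case 3', moves 3 words, then
  makes two more full 8-word passes, for 3 + 8 + 8 = 19 words total.
*/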
465 Define HAVE_MMAP to optionally make malloc() use mmap() to
466 allocate very large blocks. These will be returned to the
467 operating system immediately after a free().
475 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
476 large blocks. This is currently only possible on Linux with
477 kernel versions newer than 1.3.77.
481 #ifdef INTERNAL_LINUX_C_LIB
482 #define HAVE_MREMAP 1
484 #define HAVE_MREMAP 0
492 #include <sys/mman.h>
494 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
495 #define MAP_ANONYMOUS MAP_ANON
498 #endif /* HAVE_MMAP */
501 Access to system page size. To the extent possible, this malloc
502 manages memory from the system in page-size units.
504 The following mechanics for getpagesize were adapted from
505 bsd/gnu getpagesize.h
508 #ifndef LACKS_UNISTD_H
512 #ifndef malloc_getpagesize
513 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
514 # ifndef _SC_PAGE_SIZE
515 # define _SC_PAGE_SIZE _SC_PAGESIZE
518 # ifdef _SC_PAGE_SIZE
519 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
521 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
522 extern size_t getpagesize();
523 # define malloc_getpagesize getpagesize()
526 # define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
528 # ifndef LACKS_SYS_PARAM_H
529 # include <sys/param.h>
531 # ifdef EXEC_PAGESIZE
532 # define malloc_getpagesize EXEC_PAGESIZE
536 # define malloc_getpagesize NBPG
538 # define malloc_getpagesize (NBPG * CLSIZE)
542 # define malloc_getpagesize NBPC
545 # define malloc_getpagesize PAGESIZE
547 # define malloc_getpagesize (4096) /* just guess */
560 This version of malloc supports the standard SVID/XPG mallinfo
561 routine that returns a struct containing the same kind of
562 information you can get from malloc_stats. It should work on
563 any SVID/XPG compliant system that has a /usr/include/malloc.h
564 defining struct mallinfo. (If you'd like to install such a thing
565 yourself, cut out the preliminary declarations as described above
566 and below and save them in a malloc.h file. But there's no
567 compelling reason to bother to do this.)
569 The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
571 bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
573 mallinfo() with other numbers that might possibly be of interest.
575 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
576 /usr/include/malloc.h file that includes a declaration of struct
577 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.
583 /* #define HAVE_USR_INCLUDE_MALLOC_H */
585 #if HAVE_USR_INCLUDE_MALLOC_H
586 #include "/usr/include/malloc.h"
589 /* SVID2/XPG mallinfo structure */
struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
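
/*
  For illustration (an added sketch, not part of the allocator), a
  program can inspect these statistics at run time roughly like this:

     struct mallinfo mi = mallinfo();
     printf("arena=%d inuse=%d free=%d trimmable=%d\n",
            mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/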
604 /* SVID2/XPG mallopt options */
606 #define M_MXFAST 1 /* UNUSED in this malloc */
607 #define M_NLBLKS 2 /* UNUSED in this malloc */
608 #define M_GRAIN 3 /* UNUSED in this malloc */
609 #define M_KEEP 4 /* UNUSED in this malloc */
613 /* mallopt options that actually do something */
#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
618 #define M_MMAP_MAX -4
621 #ifndef DEFAULT_TRIM_THRESHOLD
622 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
626 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
627 to keep before releasing via malloc_trim in free().
629 Automatic trimming is mainly useful in long-lived programs.
630 Because trimming via sbrk can be slow on some systems, and can
631 sometimes be wasteful (in cases where programs immediately
632 afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing this much memory.
636 The trim threshold and the mmap control parameters (see below)
637 can be traded off with one another. Trimming and mmapping are
638 two different ways of releasing unused memory back to the
639 system. Between these two, it is often possible to keep
640 system-level demands of a long-lived program down to a bare
641 minimum. For example, in one test suite of sessions measuring
642 the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.
646 If you are using this malloc in a long-lived program, it should
647 pay to experiment with these values. As a rough guide, you
648 might set to a value close to the average size of a process
649 (program) running on your system. Releasing this much memory
650 would allow such a process to run in memory. Generally, it's
  worth it to tune for trimming rather than memory mapping when a
652 program undergoes phases where several large chunks are
653 allocated and released in ways that can reuse each other's
654 storage, perhaps mixed with phases where there are no such
655 chunks at all. And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.
659 However, in most programs, these parameters serve mainly as
660 protection against the system-level effects of carrying around
661 massive amounts of unneeded memory. Since frequent calls to
662 sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.
666 The default trim value is high enough to cause trimming only in
667 fairly extreme (by current memory consumption standards) cases.
668 It must be greater than page size to have any useful effect. To
669 disable trimming completely, you can set to (unsigned long)(-1);
675 #ifndef DEFAULT_TOP_PAD
676 #define DEFAULT_TOP_PAD (0)
680 M_TOP_PAD is the amount of extra `padding' space to allocate or
681 retain whenever sbrk is called. It is used in two ways internally:
683 * When sbrk is called to extend the top of the arena to satisfy
684 a new malloc request, this much padding is added to the sbrk
687 * When malloc_trim is called automatically from free(),
688 it is used as the `pad' argument.
690 In both cases, the actual amount of padding is rounded
691 so that the end of the arena is always a system page boundary.
693 The main reason for using padding is to avoid calling sbrk so
694 often. Having even a small pad greatly reduces the likelihood
695 that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes time.
699 Automatic rounding-up to page-size units is normally sufficient
700 to avoid measurable overhead, so the default is 0. However, in
701 systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  is needed.
708 #ifndef DEFAULT_MMAP_THRESHOLD
709 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
714 M_MMAP_THRESHOLD is the request size threshold for using mmap()
715 to service a request. Requests of at least this size that cannot
716 be allocated using already-existing space will be serviced via mmap.
717 (If enough normal freed space already exists it is used instead.)
719 Using mmap segregates relatively large chunks of memory so that
720 they can be individually obtained and released from the host
721 system. A request serviced through mmap is never reused by any
722 other request (at least not directly; the system may just so
723 happen to remap successive requests to the same locations).
725 Segregating space in this way has the benefit that mmapped space
726 can ALWAYS be individually released back to the system, which
727 helps keep the system level memory demands of a long-lived
728 program low. Mapped memory can never become `locked' between
729 other chunks, as can happen with normally allocated chunks, which
  means that even trimming via malloc_trim would not release them.
732 However, it has the disadvantages that:
734 1. The space cannot be reclaimed, consolidated, and then
735 used to service later requests, as happens with normal chunks.
  2. It can lead to more wastage because of mmap page alignment
     requirements.
738 3. It causes malloc performance to be more dependent on host
739 system memory management support routines which may vary in
740 implementation quality and may impose arbitrary
741 limitations. Generally, servicing a request via normal
742 malloc steps is faster than going through a system's mmap.
744 All together, these considerations should lead you to use mmap
745 only for relatively large requests.
751 #ifndef DEFAULT_MMAP_MAX
753 #define DEFAULT_MMAP_MAX (64)
755 #define DEFAULT_MMAP_MAX (0)
760 M_MMAP_MAX is the maximum number of requests to simultaneously
761 service using mmap. This parameter exists because:
  1. Some systems have a limited number of internal tables for
     use by mmap.
  2. In most systems, overreliance on mmap can degrade overall
     performance.
767 3. If a program allocates many large regions, it is probably
768 better off using normal sbrk-based allocation routines that
769 can reclaim and reallocate normal heap memory. Using a
770 small value allows transition into this mode after the
771 first few allocations.
773 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
774 the default value is 0, and attempts to set it to non-zero values
775 in mallopt will fail.
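
  For illustration (an added sketch), all four tunables described
  above can be adjusted at run time; each call returns 1 on success:

     mallopt(M_TRIM_THRESHOLD, 128 * 1024);
     mallopt(M_TOP_PAD,         16 * 1024);
     mallopt(M_MMAP_THRESHOLD, 192 * 1024);
     mallopt(M_MMAP_MAX,        32);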
780 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
781 Useful to quickly avoid procedure declaration conflicts and linker
782 symbol conflicts with existing memory allocation routines.
786 /* #define USE_DL_PREFIX */
791 Special defines for linux libc
793 Except when compiled using these special defines for Linux libc
794 using weak aliases, this malloc is NOT designed to work in
795 multithreaded applications. No semaphores or other concurrency
796 control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
798 semaphore could be used across malloc, realloc, and free (which is
799 essentially the effect of the linux weak alias approach). It would
800 be hard to obtain finer granularity.
805 #ifdef INTERNAL_LINUX_C_LIB
809 Void_t * __default_morecore_init (ptrdiff_t);
810 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
814 Void_t * __default_morecore_init ();
815 Void_t *(*__morecore)() = __default_morecore_init;
819 #define MORECORE (*__morecore)
820 #define MORECORE_FAILURE 0
821 #define MORECORE_CLEARS 1
823 #else /* INTERNAL_LINUX_C_LIB */
826 extern Void_t* sbrk(ptrdiff_t);
828 extern Void_t* sbrk();
832 #define MORECORE sbrk
835 #ifndef MORECORE_FAILURE
836 #define MORECORE_FAILURE -1
839 #ifndef MORECORE_CLEARS
840 #define MORECORE_CLEARS 1
843 #endif /* INTERNAL_LINUX_C_LIB */
845 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
847 #define cALLOc __libc_calloc
848 #define fREe __libc_free
849 #define mALLOc __libc_malloc
850 #define mEMALIGn __libc_memalign
851 #define rEALLOc __libc_realloc
852 #define vALLOc __libc_valloc
853 #define pvALLOc __libc_pvalloc
854 #define mALLINFo __libc_mallinfo
855 #define mALLOPt __libc_mallopt
857 #pragma weak calloc = __libc_calloc
858 #pragma weak free = __libc_free
859 #pragma weak cfree = __libc_free
860 #pragma weak malloc = __libc_malloc
861 #pragma weak memalign = __libc_memalign
862 #pragma weak realloc = __libc_realloc
863 #pragma weak valloc = __libc_valloc
864 #pragma weak pvalloc = __libc_pvalloc
865 #pragma weak mallinfo = __libc_mallinfo
866 #pragma weak mallopt = __libc_mallopt
#define cALLOc dlcalloc
#define fREe dlfree
873 #define mALLOc dlmalloc
874 #define mEMALIGn dlmemalign
875 #define rEALLOc dlrealloc
876 #define vALLOc dlvalloc
877 #define pvALLOc dlpvalloc
878 #define mALLINFo dlmallinfo
879 #define mALLOPt dlmallopt
880 #else /* USE_DL_PREFIX */
#define cALLOc calloc
#define fREe free
883 #define mALLOc malloc
884 #define mEMALIGn memalign
885 #define rEALLOc realloc
886 #define vALLOc valloc
887 #define pvALLOc pvalloc
888 #define mALLINFo mallinfo
889 #define mALLOPt mallopt
890 #endif /* USE_DL_PREFIX */
894 /* Public routines */
Void_t* mALLOc(size_t);
void    fREe(Void_t*);
900 Void_t* rEALLOc(Void_t*, size_t);
901 Void_t* mEMALIGn(size_t, size_t);
902 Void_t* vALLOc(size_t);
903 Void_t* pvALLOc(size_t);
904 Void_t* cALLOc(size_t, size_t);
906 int malloc_trim(size_t);
size_t malloc_usable_size(Void_t*);
void   malloc_stats();
909 int mALLOPt(int, int);
910 struct mallinfo mALLINFo(void);
921 size_t malloc_usable_size();
924 struct mallinfo mALLINFo();
929 }; /* end of extern "C" */
932 /* ---------- To make a malloc.h, end cutting here ------------ */
933 #else /* Moved to malloc.h */
938 static void malloc_update_mallinfo (void);
939 void malloc_stats (void);
941 static void malloc_update_mallinfo ();
946 #endif /* 0 */ /* Moved to malloc.h */
949 DECLARE_GLOBAL_DATA_PTR;
952 Emulation of sbrk for WIN32
953 All code within the ifdef WIN32 is untested by me.
955 Thanks to Martin Fong and others for supplying this.
961 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
962 ~(malloc_getpagesize-1))
963 #define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
/* reserve 64MB to ensure large contiguous space */
966 #define RESERVED_SIZE (1024*1024*64)
967 #define NEXT_SIZE (2048*1024)
968 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
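
/*
  For illustration (an added note, not from the original sources):
  with a 4096-byte page, AlignPage(5000) rounds up to 8192 and
  AlignPage(4096) stays 4096, while AlignPage64K(5000) rounds up to
  0x10000 (65536).
*/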
970 struct GmListElement;
971 typedef struct GmListElement GmListElement;
979 static GmListElement* head = 0;
980 static unsigned int gNextAddress = 0;
981 static unsigned int gAddressBase = 0;
982 static unsigned int gAllocatedSize = 0;
985 GmListElement* makeGmListElement (void* bas)
988 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
1002 assert ( (head == NULL) || (head->base == (void*)gAddressBase));
1003 if (gAddressBase && (gNextAddress - gAddressBase))
1005 rval = VirtualFree ((void*)gAddressBase,
1006 gNextAddress - gAddressBase,
1012 GmListElement* next = head->next;
1013 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1021 void* findRegion (void* start_address, unsigned long size)
1023 MEMORY_BASIC_INFORMATION info;
1024 if (size >= TOP_MEMORY) return NULL;
1026 while ((unsigned long)start_address + size < TOP_MEMORY)
1028 VirtualQuery (start_address, &info, sizeof (info));
1029 if ((info.State == MEM_FREE) && (info.RegionSize >= size))
1030 return start_address;
1033 /* Requested region is not available so see if the */
1034 /* next region is available. Set 'start_address' */
1035 /* to the next region and call 'VirtualQuery()' */
1038 start_address = (char*)info.BaseAddress + info.RegionSize;
1040 /* Make sure we start looking for the next region */
1041 /* on the *next* 64K boundary. Otherwise, even if */
1042 /* the new region is free according to */
1043 /* 'VirtualQuery()', the subsequent call to */
1044 /* 'VirtualAlloc()' (which follows the call to */
1045 /* this routine in 'wsbrk()') will round *down* */
1046 /* the requested address to a 64K boundary which */
1047 /* we already know is an address in the */
1048 /* unavailable region. Thus, the subsequent call */
1049 /* to 'VirtualAlloc()' will fail and bring us back */
1050 /* here, causing us to go into an infinite loop. */
1053 (void *) AlignPage64K((unsigned long) start_address);
1061 void* wsbrk (long size)
1066 if (gAddressBase == 0)
1068 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1069 gNextAddress = gAddressBase =
1070 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1071 MEM_RESERVE, PAGE_NOACCESS);
1072 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1075 long new_size = max (NEXT_SIZE, AlignPage (size));
1076 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1079 new_address = findRegion (new_address, new_size);
1081 if (new_address == 0)
1084 gAddressBase = gNextAddress =
1085 (unsigned int)VirtualAlloc (new_address, new_size,
1086 MEM_RESERVE, PAGE_NOACCESS);
1087 /* repeat in case of race condition */
1088 /* The region that we found has been snagged */
1089 /* by another thread */
1091 while (gAddressBase == 0);
1093 assert (new_address == (void*)gAddressBase);
1095 gAllocatedSize = new_size;
1097 if (!makeGmListElement ((void*)gAddressBase))
1100 if ((size + gNextAddress) > AlignPage (gNextAddress))
1103 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1104 (size + gNextAddress -
1105 AlignPage (gNextAddress)),
1106 MEM_COMMIT, PAGE_READWRITE);
1110 tmp = (void*)gNextAddress;
1111 gNextAddress = (unsigned int)tmp + size;
1116 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1117 /* Trim by releasing the virtual memory */
1118 if (alignedGoal >= gAddressBase)
1120 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1122 gNextAddress = gNextAddress + size;
1123 return (void*)gNextAddress;
1127 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1129 gNextAddress = gAddressBase;
1135 return (void*)gNextAddress;
struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};
1154 typedef struct malloc_chunk* mchunkptr;
1158 malloc_chunk details:
1160 (The following includes lightly edited explanations by Colin Plumb.)
1162 Chunks of memory are maintained using a `boundary tag' method as
1163 described in e.g., Knuth or Standish. (See the paper by Paul
1164 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1165 survey of such techniques.) Sizes of free chunks are stored both
1166 in the front of each chunk and at the end. This makes
1167 consolidating fragmented chunks into bigger chunks very fast. The
1168 size fields also hold bits representing whether chunks are free or
1171 An allocated chunk looks like this:
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_size() bytes)                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1188 Where "chunk" is the front of the chunk for the purpose of most of
1189 the malloc code, but "mem" is the pointer that is returned to the
1190 user. "Nextchunk" is the beginning of the next contiguous chunk.
  Chunks always begin on even word boundaries, so the mem portion
1193 (which is returned to the user) is also on an even word boundary, and
1194 thus double-word aligned.
1196 Free chunks are stored in circular doubly-linked lists, and look like this:
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1214 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1215 chunk size (which is always a multiple of two words), is an in-use
1216 bit for the *previous* chunk. If that bit is *clear*, then the
1217 word before the current chunk size contains the previous chunk
1218 size, and can be used to find the front of the previous chunk.
1219 (The very first chunk allocated always has this bit set,
1220 preventing access to non-existent (or non-owned) memory.)
1222 Note that the `foot' of the current chunk is actually represented
1223 as the prev_size of the NEXT chunk. (This makes it easier to
1224 deal with alignments etc).
1226 The two exceptions to all this are
1228 1. The special chunk `top', which doesn't bother using the
1229 trailing size field since there is no
1230 next contiguous chunk that would have to index off it. (After
1231 initialization, `top' is forced to always exist. If it would
       become less than MINSIZE bytes long, it is replenished via
       malloc_extend_top.)
1235 2. Chunks allocated via mmap, which have the second-lowest-order
1236 bit (IS_MMAPPED) set in their size fields. Because they are
1237 never merged or traversed from any other chunk, they have no
1238 foot size or inuse information.
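
    As a concrete sketch of this navigation (an added illustration):
    for a chunk pointer p, the next physical chunk starts at
    (char*)p + (p->size & ~PREV_INUSE); whether p itself is in use is
    recorded in that next chunk's PREV_INUSE bit; and if p's own
    PREV_INUSE bit is clear, the previous chunk starts at
    (char*)p - p->prev_size.  The macros below encode exactly these
    steps.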
1240 Available chunks are kept in any of several places (all declared below):
1242 * `av': An array of chunks serving as bin headers for consolidated
1243 chunks. Each bin is doubly linked. The bins are approximately
1244 proportionally (log) spaced. There are a lot of these bins
1245 (128). This may look excessive, but works very well in
1246 practice. All procedures maintain the invariant that no
1247 consolidated chunk physically borders another one. Chunks in
1248 bins are kept in size order, with ties going to the
1249 approximately least recently used chunk.
1251 The chunks in each bin are maintained in decreasing sorted order by
1252 size. This is irrelevant for the small bins, which all contain
1253 the same-sized chunks, but facilitates best-fit allocation for
1254 larger chunks. (These lists are just sequential. Keeping them in
1255 order almost never requires enough traversal to warrant using
1256 fancier ordered data structures.) Chunks of the same size are
1257 linked with the most recently freed at the front, and allocations
1258 are taken from the back. This results in LRU or FIFO allocation
1259 order, which tends to give each chunk an equal opportunity to be
1260 consolidated with adjacent freed chunks, resulting in larger free
1261 chunks and less fragmentation.
1263 * `top': The top-most available chunk (i.e., the one bordering the
1264 end of available memory) is treated specially. It is never
1265 included in any bin, is used only if no other chunk is
1266 available, and is released back to the system if it is very
1267 large (see M_TRIM_THRESHOLD).
1269 * `last_remainder': A bin holding only the remainder of the
1270 most recently split (non-top) chunk. This bin is checked
1271 before other non-fitting chunks, so as to provide better
1272 locality for runs of sequentially allocated chunks.
1274 * Implicitly, through the host system's memory mapping tables.
1275 If supported, requests greater than a threshold are usually
1276 serviced via calls to mmap, and then later released via munmap.
1280 /* sizes, alignments */
1282 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1283 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1284 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1285 #define MINSIZE (sizeof(struct malloc_chunk))
1287 /* conversion from malloc headers to user pointers, and back */
1289 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1290 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1292 /* pad request bytes into a usable size */
1294 #define request2size(req) \
1295 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1296 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1297 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1299 /* Check if m has acceptable alignment */
1301 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
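
/*
  Worked examples (an added note, assuming 4-byte INTERNAL_SIZE_T, so
  SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and MINSIZE == 16):

     request2size(1)  == 16   (padded up to MINSIZE)
     request2size(13) == 24   (13 + 4 overhead, rounded up to 24)
     request2size(24) == 32

  and for a chunk header at address p, chunk2mem(p) is (char*)p + 8,
  which mem2chunk inverts.
*/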
1304 Physical chunk operations
1308 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1310 #define PREV_INUSE 0x1
1312 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1314 #define IS_MMAPPED 0x2
1316 /* Bits to mask off when extracting size */
1318 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1321 /* Ptr to next physical malloc_chunk. */
1323 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1325 /* Ptr to previous physical malloc_chunk */
1327 #define prev_chunk(p)\
1328 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1331 /* Treat space at ptr + offset as a chunk */
1333 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1336 Dealing with use bits
/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1344 /* extract inuse bit of previous chunk */
1346 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1348 /* check for mmap()'ed chunk */
1350 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1352 /* set/clear chunk as in use without otherwise disturbing */
1354 #define set_inuse(p)\
1355 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1357 #define clear_inuse(p)\
1358 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1360 /* check/set/clear inuse bits in known places */
1362 #define inuse_bit_at_offset(p, s)\
1363 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1365 #define set_inuse_bit_at_offset(p, s)\
1366 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1368 #define clear_inuse_bit_at_offset(p, s)\
1369 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1372 Dealing with size fields
1375 /* Get size, ignoring use bits */
1377 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1379 /* Set size at head, without disturbing its use bit */
1381 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1383 /* Set size/use ignoring previous bits in header */
1385 #define set_head(p, s) ((p)->size = (s))
1387 /* Set size at footer (only when chunk is not in use) */
1389 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
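
/*
  For illustration (an added note, assuming 4-byte INTERNAL_SIZE_T): a
  free 24-byte chunk whose physical predecessor is in use has
  p->size == 24|PREV_INUSE == 25.  Then chunksize(p) == 24;
  set_head_size(p, 16) preserves the PREV_INUSE bit (size becomes 17);
  and set_foot(p, 16) stores 16 in the prev_size field of the chunk 16
  bytes ahead.
*/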
1394 The bins, `av_' are an array of pairs of pointers serving as the
1395 heads of (initially empty) doubly-linked lists of chunks, laid out
1396 in a way so that each pair can be treated as if it were in a
1397 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1398 and chunks are the same).
1400 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1401 8 bytes apart. Larger bins are approximately logarithmically
1402 spaced. (See the table below.) The `av_' array is never mentioned
1403 directly in the code, but instead via bin access macros.
      64 bins of size       8
      32 bins of size      64
      16 bins of size     512
       8 bins of size    4096
       4 bins of size   32768
       2 bins of size  262144
       1 bin  of size what's left
1415 There is actually a little bit of slop in the numbers in bin_index
1416 for the sake of speed. This makes no difference elsewhere.
1418 The special chunks `top' and `last_remainder' get their own bins,
1419 (this is implemented via yet more trickery with the av_ array),
1420 although `top' is never properly linked to its bin since it is
1421 always handled specially.
1425 #define NAV 128 /* number of bins */
1427 typedef struct malloc_chunk* mbinptr;
1431 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1432 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1433 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1436 The first 2 bins are never indexed. The corresponding av_ cells are instead
1437 used for bookkeeping. This is not to save space, but to simplify
1438 indexing, maintain locality, and avoid some initialization tests.
1441 #define top (bin_at(0)->fd) /* The topmost chunk */
1442 #define last_remainder (bin_at(1)) /* remainder from last split */
1446 Because top initially points to its own bin with initial
1447 zero size, thus forcing extension on the first malloc request,
1448 we avoid having any special code in malloc to check whether
1449 it even exists yet. But we still need to in malloc_extend_top.
1452 #define initial_top ((mchunkptr)(bin_at(0)))
1454 /* Helper macro to initialize bins */
1456 #define IAV(i) bin_at(i), bin_at(i)
1458 static mbinptr av_[NAV * 2 + 2] = {
1460 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1461 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1462 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1463 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1464 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1465 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1466 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1467 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1468 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1469 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1470 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1471 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1472 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1473 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1474 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1475 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
void malloc_bin_reloc (void)
{
	unsigned long *p = (unsigned long *)(&av_[2]);
	int i;
	for (i=2; i<(sizeof(av_)/sizeof(mbinptr)); ++i)
		*p++ += gd->reloc_off;
}
1487 /* field-extraction macros */
1489 #define first(b) ((b)->fd)
1490 #define last(b) ((b)->bk)
1496 #define bin_index(sz) \
1497 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1498 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1499 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1500 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1501 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
                                          126)
1505 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1506 identically sized chunks. This is exploited in malloc.
1509 #define MAX_SMALLBIN 63
1510 #define MAX_SMALLBIN_SIZE 512
1511 #define SMALLBIN_WIDTH 8
1513 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
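
/*
  Worked examples of the indexing above (an added note):
  smallbin_index(40) == 5, so 40-byte chunks live in bin 5;
  bin_index(600) == 56 + (600 >> 6) == 65; and
  bin_index(20000) == 110 + (20000 >> 12) == 114.
*/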
1516 Requests are `small' if both the corresponding and the next bin are small
1519 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1522 To help compensate for the large number of bins, a one-level index
1523 structure is used for bin-by-bin searching. `binblocks' is a
1524 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1525 have any (possibly) non-empty bins, so they can be skipped over
  all at once during traversals. The bits are NOT always
1527 cleared as soon as all bins in a block are empty, but instead only
1528 when all are noticed to be empty during traversal in malloc.
1531 #define BINBLOCKWIDTH 4 /* bins per block */
1533 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1535 /* bin<->block macros */
1537 #define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
1538 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1539 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
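
/*
  For illustration (an added note): with BINBLOCKWIDTH == 4, bin 5
  belongs to block 5/4 == 1, so idx2binblock(5) == (1 << 1) == 2 and
  mark_binblock(5) sets bit 1 of binblocks; clear_binblock(5) clears
  it again once the whole block is seen to be empty.
*/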
1541 /* Other static bookkeeping data */
1543 /* variables holding tunable values */
1545 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1546 static unsigned long top_pad = DEFAULT_TOP_PAD;
1547 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1548 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1550 /* The first value returned from sbrk */
1551 static char* sbrk_base = (char*)(-1);
1553 /* The maximum memory obtained from system via sbrk */
1554 static unsigned long max_sbrked_mem = 0;
1556 /* The maximum via either sbrk or mmap */
1557 static unsigned long max_total_mem = 0;
1559 /* internal working copy of mallinfo */
1560 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1562 /* The total memory obtained from system via sbrk */
1563 #define sbrked_mem (current_mallinfo.arena)
1565 /* Tracking mmaps */
1568 static unsigned int n_mmaps = 0;
1570 static unsigned long mmapped_mem = 0;
1572 static unsigned int max_n_mmaps = 0;
1573 static unsigned long max_mmapped_mem = 0;
1584 These routines make a number of assertions about the states
1585 of data structures that should be true at all times. If any
1586 are not true, it's very likely that a user program has somehow
1587 trashed memory. (It's also possible that there is a coding error
1588 in malloc. In which case, please report it!)
1592 static void do_check_chunk(mchunkptr p)
1594 static void do_check_chunk(p) mchunkptr p;
1597 #if 0 /* causes warnings because assert() is off */
1598 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1601 /* No checkable chunk is mmapped */
1602 assert(!chunk_is_mmapped(p));
1604 /* Check for legal address ... */
1605 assert((char*)p >= sbrk_base);
1607 assert((char*)p + sz <= (char*)top);
1609 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1615 static void do_check_free_chunk(mchunkptr p)
1617 static void do_check_free_chunk(p) mchunkptr p;
1620 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1621 #if 0 /* causes warnings because assert() is off */
1622 mchunkptr next = chunk_at_offset(p, sz);
1627 /* Check whether it claims to be free ... */
1630 /* Unless a special marker, must have OK fields */
1631 if ((long)sz >= (long)MINSIZE)
1633 assert((sz & MALLOC_ALIGN_MASK) == 0);
1634 assert(aligned_OK(chunk2mem(p)));
1635 /* ... matching footer field */
1636 assert(next->prev_size == sz);
1637 /* ... and is fully consolidated */
1638 assert(prev_inuse(p));
1639 assert (next == top || inuse(next));
1641 /* ... and has minimally sane links */
1642 assert(p->fd->bk == p);
1643 assert(p->bk->fd == p);
1645 else /* markers are always of size SIZE_SZ */
1646 assert(sz == SIZE_SZ);
1650 static void do_check_inuse_chunk(mchunkptr p)
1652 static void do_check_inuse_chunk(p) mchunkptr p;
1655 mchunkptr next = next_chunk(p);
1658 /* Check whether it claims to be in use ... */
1661 /* ... and is surrounded by OK chunks.
1662 Since more things can be checked with free chunks than inuse ones,
1663 if an inuse chunk borders them and debug is on, it's worth doing them.
1667 mchunkptr prv = prev_chunk(p);
1668 assert(next_chunk(prv) == p);
1669 do_check_free_chunk(prv);
1673 assert(prev_inuse(next));
1674 assert(chunksize(next) >= MINSIZE);
1676 else if (!inuse(next))
1677 do_check_free_chunk(next);
1682 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1684 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1687 #if 0 /* causes warnings because assert() is off */
1688 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1692 do_check_inuse_chunk(p);
1694 /* Legal size ... */
1695 assert((long)sz >= (long)MINSIZE);
1696 assert((sz & MALLOC_ALIGN_MASK) == 0);
1698 assert(room < (long)MINSIZE);
1700 /* ... and alignment */
1701 assert(aligned_OK(chunk2mem(p)));
1704 /* ... and was allocated at front of an available chunk */
1705 assert(prev_inuse(p));
1710 #define check_free_chunk(P) do_check_free_chunk(P)
1711 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1712 #define check_chunk(P) do_check_chunk(P)
1713 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1715 #define check_free_chunk(P)
1716 #define check_inuse_chunk(P)
1717 #define check_chunk(P)
1718 #define check_malloced_chunk(P,N)
1722 Macro-based internal utilities
1727 Linking chunks in bin lists.
1728 Call these only with variables, not arbitrary expressions, as arguments.
1732 Place chunk p of size s in its bin, in size order,
1733 putting it ahead of others of same size.
1737 #define frontlink(P, S, IDX, BK, FD) \
1739 if (S < MAX_SMALLBIN_SIZE) \
1741 IDX = smallbin_index(S); \
1742 mark_binblock(IDX); \
1747 FD->bk = BK->fd = P; \
1751 IDX = bin_index(S); \
1754 if (FD == BK) mark_binblock(IDX); \
1757 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1762 FD->bk = BK->fd = P; \
1767 /* take a chunk off a list */
1769 #define unlink(P, BK, FD) \
1777 /* Place p as the last remainder */
#define link_last_remainder(P)                          \
{                                                       \
  last_remainder->fd = last_remainder->bk = P;          \
  P->fd = P->bk = last_remainder;                       \
}
1785 /* Clear the last_remainder bin */
1787 #define clear_last_remainder \
1788 (last_remainder->fd = last_remainder->bk = last_remainder)
1790 /* Routines dealing with mmap(). */
1795 static mchunkptr mmap_chunk(size_t size)
1797 static mchunkptr mmap_chunk(size) size_t size;
1800 size_t page_mask = malloc_getpagesize - 1;
1803 #ifndef MAP_ANONYMOUS
1807 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1809 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
 * there is no following chunk whose prev_size field could be used.
 */
1812 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1814 #ifdef MAP_ANONYMOUS
1815 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1816 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1817 #else /* !MAP_ANONYMOUS */
1820 fd = open("/dev/zero", O_RDWR);
1821 if(fd < 0) return 0;
1823 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1826 if(p == (mchunkptr)-1) return 0;
1829 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1831 /* We demand that eight bytes into a page must be 8-byte aligned. */
1832 assert(aligned_OK(chunk2mem(p)));
1834 /* The offset to the start of the mmapped region is stored
1835 * in the prev_size field of the chunk; normally it is zero,
1836 * but that can be changed in memalign().
1839 set_head(p, size|IS_MMAPPED);
1841 mmapped_mem += size;
1842 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1843 max_mmapped_mem = mmapped_mem;
1844 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1845 max_total_mem = mmapped_mem + sbrked_mem;
1850 static void munmap_chunk(mchunkptr p)
1852 static void munmap_chunk(p) mchunkptr p;
1855 INTERNAL_SIZE_T size = chunksize(p);
1858 assert (chunk_is_mmapped(p));
1859 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1860 assert((n_mmaps > 0));
1861 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1864 mmapped_mem -= (size + p->prev_size);
1866 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1868 /* munmap returns non-zero on failure */
1875 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1877 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1880 size_t page_mask = malloc_getpagesize - 1;
1881 INTERNAL_SIZE_T offset = p->prev_size;
1882 INTERNAL_SIZE_T size = chunksize(p);
1885 assert (chunk_is_mmapped(p));
1886 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1887 assert((n_mmaps > 0));
1888 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1890 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1891 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1893 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1895 if (cp == (char *)-1) return 0;
1897 p = (mchunkptr)(cp + offset);
1899 assert(aligned_OK(chunk2mem(p)));
1901 assert((p->prev_size == offset));
1902 set_head(p, (new_size - offset)|IS_MMAPPED);
1904 mmapped_mem -= size + offset;
1905 mmapped_mem += new_size;
1906 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1907 max_mmapped_mem = mmapped_mem;
1908 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1909 max_total_mem = mmapped_mem + sbrked_mem;
1913 #endif /* HAVE_MREMAP */
1915 #endif /* HAVE_MMAP */
1918 Extend the top-most chunk by obtaining memory from system.
1919 Main interface to sbrk (but see also malloc_trim).
1923 static void malloc_extend_top(INTERNAL_SIZE_T nb)
1925 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1928 char* brk; /* return value from sbrk */
1929 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1930 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1931 char* new_brk; /* return of 2nd sbrk call */
1932 INTERNAL_SIZE_T top_size; /* new size of top chunk */
1934 mchunkptr old_top = top; /* Record state of old top */
1935 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1936 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
1938 /* Pad request with top_pad plus minimal overhead */
1940 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
1941 unsigned long pagesz = malloc_getpagesize;
1943 /* If not the first time through, round to preserve page boundary */
1944 /* Otherwise, we need to correct to a page size below anyway. */
1945 /* (We also correct below if an intervening foreign sbrk call.) */
1947 if (sbrk_base != (char*)(-1))
1948 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1950 brk = (char*)(MORECORE (sbrk_size));
1952 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
1953 if (brk == (char*)(MORECORE_FAILURE) ||
      (brk < old_end && old_top != initial_top))
    return;
1957 sbrked_mem += sbrk_size;
1959 if (brk == old_end) /* can just add bytes to current top */
1961 top_size = sbrk_size + old_top_size;
1962 set_head(top, top_size | PREV_INUSE);
    if (sbrk_base == (char*)(-1)) /* First time through. Record base */
      sbrk_base = brk;
1968 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
1969 sbrked_mem += brk - (char*)old_end;
1971 /* Guarantee alignment of first new chunk made from this space */
1972 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
1973 if (front_misalign > 0)
1975 correction = (MALLOC_ALIGNMENT) - front_misalign;
1981 /* Guarantee the next brk will be at a page boundary */
1983 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
1984 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
1986 /* Allocate correction */
1987 new_brk = (char*)(MORECORE (correction));
1988 if (new_brk == (char*)(MORECORE_FAILURE)) return;
1990 sbrked_mem += correction;
1992 top = (mchunkptr)brk;
1993 top_size = new_brk - brk + correction;
1994 set_head(top, top_size | PREV_INUSE);
1996 if (old_top != initial_top)
1999 /* There must have been an intervening foreign sbrk call. */
2000 /* A double fencepost is necessary to prevent consolidation */
2002 /* If not enough space to do this, then user did something very wrong */
2003 if (old_top_size < MINSIZE)
2005 set_head(top, PREV_INUSE); /* will force null return from malloc */
2009 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2010 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2011 set_head_size(old_top, old_top_size);
2012 chunk_at_offset(old_top, old_top_size )->size =
2014 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2016 /* If possible, release the rest. */
2017 if (old_top_size >= MINSIZE)
2018 fREe(chunk2mem(old_top));
2022 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2023 max_sbrked_mem = sbrked_mem;
2024 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2025 max_total_mem = mmapped_mem + sbrked_mem;
2027 /* We always land on a page boundary */
2028 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
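/*
  Worked example (illustrative): suppose MALLOC_ALIGNMENT is 8 and a
  foreign sbrk() left the break so that chunk2mem(brk) ends in 0x4.
  Then front_misalign = 4, and correction starts at 8 - 4 = 4 bytes,
  plus whatever more is needed to push the next break back onto a
  page boundary, exactly as computed above.
*/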
2031 /* Main public routines */
2037 The requested size is first converted into a usable form, `nb'.
2038 This currently means to add 4 bytes overhead plus possibly more to
2039 obtain 8-byte alignment and/or to obtain a size of at least
2040 MINSIZE (currently 16 bytes), the smallest allocatable size.
2041 (All fits are considered `exact' if they are within MINSIZE bytes.)
2043     From there, the first of the following steps that succeeds is taken:
2045 1. The bin corresponding to the request size is scanned, and if
2046 a chunk of exactly the right size is found, it is taken.
2048 2. The most recently remaindered chunk is used if it is big
2049 enough. This is a form of (roving) first fit, used only in
2050 the absence of exact fits. Runs of consecutive requests use
2051 the remainder of the chunk used for the previous such request
2052 whenever possible. This limited use of a first-fit style
2053 allocation strategy tends to give contiguous chunks
2054 coextensive lifetimes, which improves locality and can reduce
2055 fragmentation in the long run.
2057 3. Other bins are scanned in increasing size order, using a
2058 chunk big enough to fulfill the request, and splitting off
2059 any remainder. This search is strictly by best-fit; i.e.,
2060 the smallest (with ties going to approximately the least
2061 recently used) chunk that fits is selected.
2063 4. If large enough, the chunk bordering the end of memory
2064 (`top') is split off. (This use of `top' is in accord with
2065 the best-fit search rule. In effect, `top' is treated as
2066 larger (and thus less well fitting) than any other available
2067 chunk since it can be extended to be as large as necessary
2068         (up to system limitations).)
2070 5. If the request size meets the mmap threshold and the
2071 system supports mmap, and there are few enough currently
2072 allocated mmapped regions, and a call to mmap succeeds,
2073 the request is allocated via direct memory mapping.
2075 6. Otherwise, the top of memory is extended by
2076 obtaining more space from the system (normally using sbrk,
2077 but definable to anything else via the MORECORE macro).
2078 Memory is gathered from the system (in system page-sized
2079 units) in a way that allows chunks obtained across different
2080 sbrk calls to be consolidated, but does not require
2081 contiguous memory. Thus, it should be safe to intersperse
2082 mallocs with other sbrk calls.
2085     All allocations are made from the `lowest' part of any found
2086 chunk. (The implementation invariant is that prev_inuse is
2087 always true of any allocated chunk; i.e., that each allocated
2088 chunk borders either a previously allocated and still in-use chunk,
2089 or the base of its memory arena.)
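/*
  Illustrative sketch (not the allocator's own macro) of the request
  padding described above, assuming the 4-byte size_t configuration
  (SIZE_SZ == 4, MALLOC_ALIGNMENT == 8, MINSIZE == 16); the EX_*
  names are hypothetical.
*/
#if 0
#include <stddef.h>

#define EX_SIZE_SZ    4u   /* per-chunk overhead         */
#define EX_ALIGN_MASK 7u   /* MALLOC_ALIGNMENT - 1       */
#define EX_MINSIZE    16u  /* smallest allocatable chunk */

static size_t ex_request2size(size_t bytes)
{
  /* add overhead, round up to alignment, enforce the minimum */
  size_t nb = (bytes + EX_SIZE_SZ + EX_ALIGN_MASK) & ~(size_t)EX_ALIGN_MASK;
  return nb < EX_MINSIZE ? EX_MINSIZE : nb;
}

/* e.g. ex_request2size(1) == 16, ex_request2size(13) == 24 */
#endif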
2094 Void_t* mALLOc(size_t bytes)
2096 Void_t* mALLOc(bytes) size_t bytes;
2099 mchunkptr victim; /* inspected/selected chunk */
2100 INTERNAL_SIZE_T victim_size; /* its size */
2101 int idx; /* index for bin traversal */
2102 mbinptr bin; /* associated bin */
2103 mchunkptr remainder; /* remainder from a split */
2104 long remainder_size; /* its size */
2105 int remainder_index; /* its bin index */
2106 unsigned long block; /* block traverser bit */
2107 int startidx; /* first bin of a traversed block */
2108 mchunkptr fwd; /* misc temp for linking */
2109 mchunkptr bck; /* misc temp for linking */
2110 mbinptr q; /* misc temp */
2114 if ((long)bytes < 0) return 0;
2116 nb = request2size(bytes); /* padded request size; */
2118 /* Check for exact match in a bin */
2120 if (is_small_request(nb)) /* Faster version for small requests */
2122 idx = smallbin_index(nb);
2124 /* No traversal or size check necessary for small bins. */
2129 /* Also scan the next one, since it would have a remainder < MINSIZE */
2137 victim_size = chunksize(victim);
2138 unlink(victim, bck, fwd);
2139 set_inuse_bit_at_offset(victim, victim_size);
2140 check_malloced_chunk(victim, nb);
2141 return chunk2mem(victim);
2144 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2149 idx = bin_index(nb);
2152 for (victim = last(bin); victim != bin; victim = victim->bk)
2154 victim_size = chunksize(victim);
2155 remainder_size = victim_size - nb;
2157 if (remainder_size >= (long)MINSIZE) /* too big */
2159 --idx; /* adjust to rescan below after checking last remainder */
2163 else if (remainder_size >= 0) /* exact fit */
2165 unlink(victim, bck, fwd);
2166 set_inuse_bit_at_offset(victim, victim_size);
2167 check_malloced_chunk(victim, nb);
2168 return chunk2mem(victim);
2176 /* Try to use the last split-off remainder */
2178 if ( (victim = last_remainder->fd) != last_remainder)
2180 victim_size = chunksize(victim);
2181 remainder_size = victim_size - nb;
2183 if (remainder_size >= (long)MINSIZE) /* re-split */
2185 remainder = chunk_at_offset(victim, nb);
2186 set_head(victim, nb | PREV_INUSE);
2187 link_last_remainder(remainder);
2188 set_head(remainder, remainder_size | PREV_INUSE);
2189 set_foot(remainder, remainder_size);
2190 check_malloced_chunk(victim, nb);
2191 return chunk2mem(victim);
2194 clear_last_remainder;
2196 if (remainder_size >= 0) /* exhaust */
2198 set_inuse_bit_at_offset(victim, victim_size);
2199 check_malloced_chunk(victim, nb);
2200 return chunk2mem(victim);
2203 /* Else place in bin */
2205 frontlink(victim, victim_size, remainder_index, bck, fwd);
2209 If there are any possibly nonempty big-enough blocks,
2210     search for the best-fitting chunk by scanning bins in blockwidth units.
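/*
  Illustrative note: `binblocks' is a one-word bitmap with one bit per
  block of BINBLOCKWIDTH consecutive bins; a set bit means some bin in
  that block may be nonempty, so runs of empty bins can be skipped a
  whole block at a time. With a block width of 4, for example, bit k
  covers bins 4k .. 4k+3.
*/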
2213 if ( (block = idx2binblock(idx)) <= binblocks)
2216 /* Get to the first marked block */
2218 if ( (block & binblocks) == 0)
2220 /* force to an even block boundary */
2221 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2223 while ((block & binblocks) == 0)
2225 idx += BINBLOCKWIDTH;
2230 /* For each possibly nonempty block ... */
2233 startidx = idx; /* (track incomplete blocks) */
2234 q = bin = bin_at(idx);
2236 /* For each bin in this block ... */
2239 /* Find and use first big enough chunk ... */
2241 for (victim = last(bin); victim != bin; victim = victim->bk)
2243 victim_size = chunksize(victim);
2244 remainder_size = victim_size - nb;
2246 if (remainder_size >= (long)MINSIZE) /* split */
2248 remainder = chunk_at_offset(victim, nb);
2249 set_head(victim, nb | PREV_INUSE);
2250 unlink(victim, bck, fwd);
2251 link_last_remainder(remainder);
2252 set_head(remainder, remainder_size | PREV_INUSE);
2253 set_foot(remainder, remainder_size);
2254 check_malloced_chunk(victim, nb);
2255 return chunk2mem(victim);
2258 else if (remainder_size >= 0) /* take */
2260 set_inuse_bit_at_offset(victim, victim_size);
2261 unlink(victim, bck, fwd);
2262 check_malloced_chunk(victim, nb);
2263 return chunk2mem(victim);
2268 bin = next_bin(bin);
2270 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2272 /* Clear out the block bit. */
2274 do /* Possibly backtrack to try to clear a partial block */
2276 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2278 binblocks &= ~block;
2283 } while (first(q) == q);
2285 /* Get to the next possibly nonempty block */
2287 if ( (block <<= 1) <= binblocks && (block != 0) )
2289 while ((block & binblocks) == 0)
2291 idx += BINBLOCKWIDTH;
2301 /* Try to use top chunk */
2303 /* Require that there be a remainder, ensuring top always exists */
2304 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2308 /* If big and would otherwise need to extend, try to use mmap instead */
2309 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2310 (victim = mmap_chunk(nb)) != 0)
2311 return chunk2mem(victim);
2315 malloc_extend_top(nb);
2316 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2317 return 0; /* propagate failure */
2321 set_head(victim, nb | PREV_INUSE);
2322 top = chunk_at_offset(victim, nb);
2323 set_head(top, remainder_size | PREV_INUSE);
2324 check_malloced_chunk(victim, nb);
2325 return chunk2mem(victim);
2335 1. free(0) has no effect.
2337     2. If the chunk was allocated via mmap, it is released via munmap().
2339 3. If a returned chunk borders the current high end of memory,
2340 it is consolidated into the top, and if the total unused
2341        topmost memory exceeds the trim threshold, malloc_trim is called.
2344 4. Other chunks are consolidated as they arrive, and
2345 placed in corresponding bins. (This includes the case of
2346 consolidating with the current `last_remainder').
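/*
  Usage sketch (illustrative): how the cases above combine in
  practice. malloc_trim() is this allocator's public entry point;
  the demo function itself is hypothetical.
*/
#if 0
#include <stdlib.h>

static void demo_free_trim(void)
{
  void* big = malloc(1024 * 1024);
  /* ... use big ... */
  free(big);        /* if `big' borders top, it merges into top     */
  malloc_trim(0);   /* optionally return unused top pages to the OS */
}
#endif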
2352 void fREe(Void_t* mem)
2354 void fREe(mem) Void_t* mem;
2357 mchunkptr p; /* chunk corresponding to mem */
2358 INTERNAL_SIZE_T hd; /* its head field */
2359 INTERNAL_SIZE_T sz; /* its size */
2360 int idx; /* its bin index */
2361 mchunkptr next; /* next contiguous chunk */
2362 INTERNAL_SIZE_T nextsz; /* its size */
2363 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2364 mchunkptr bck; /* misc temp for linking */
2365 mchunkptr fwd; /* misc temp for linking */
2366 int islr; /* track whether merging with last_remainder */
2368 if (mem == 0) /* free(0) has no effect */
2375 if (hd & IS_MMAPPED) /* release mmapped memory. */
2382 check_inuse_chunk(p);
2384 sz = hd & ~PREV_INUSE;
2385 next = chunk_at_offset(p, sz);
2386 nextsz = chunksize(next);
2388 if (next == top) /* merge with top */
2392 if (!(hd & PREV_INUSE)) /* consolidate backward */
2394 prevsz = p->prev_size;
2395 p = chunk_at_offset(p, -((long) prevsz));
2397 unlink(p, bck, fwd);
2400 set_head(p, sz | PREV_INUSE);
2402 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2403 malloc_trim(top_pad);
2407 set_head(next, nextsz); /* clear inuse bit */
2411 if (!(hd & PREV_INUSE)) /* consolidate backward */
2413 prevsz = p->prev_size;
2414 p = chunk_at_offset(p, -((long) prevsz));
2417 if (p->fd == last_remainder) /* keep as last_remainder */
2420 unlink(p, bck, fwd);
2423 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2427 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2430 link_last_remainder(p);
2433 unlink(next, bck, fwd);
2437 set_head(p, sz | PREV_INUSE);
2440 frontlink(p, sz, idx, bck, fwd);
2447 Chunks that were obtained via mmap cannot be extended or shrunk
2448 unless HAVE_MREMAP is defined, in which case mremap is used.
2449 Otherwise, if their reallocation is for additional space, they are
2450 copied. If for less, they are just left alone.
2452 Otherwise, if the reallocation is for additional space, and the
2453 chunk can be extended, it is, else a malloc-copy-free sequence is
2454 taken. There are several different ways that a chunk could be
2455 extended. All are tried:
2457       * Extending forward into following adjacent free chunk.
2458       * Shifting backwards, joining preceding adjacent space.
2459       * Both shifting backwards and extending forward.
2460       * Extending into newly sbrked space.
2462 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2463 size argument of zero (re)allocates a minimum-sized chunk.
2465 If the reallocation is for less space, and the new request is for
2466     a `small' (<512 bytes) size, then the newly unused space is lopped off and freed.
2469 The old unix realloc convention of allowing the last-free'd chunk
2470 to be used as an argument to realloc is no longer supported.
2471 I don't know of any programs still relying on this feature,
2472 and allowing it would also allow too many other incorrect
2473 usages of realloc to be sensible.
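/*
  Usage sketch (illustrative) of the conventions documented above:
  realloc(0, n) acts like malloc(n), a grown block may move, and on
  failure the original block remains valid. The helper is
  hypothetical.
*/
#if 0
#include <stdlib.h>
#include <string.h>

static char* demo_grow(char* s, size_t oldlen, size_t newlen)
{
  char* t = realloc(s, newlen);   /* may extend in place or move */
  if (t == 0)
    return s;                     /* failure: `s' is untouched   */
  if (newlen > oldlen)
    memset(t + oldlen, 0, newlen - oldlen);  /* tail arrives uninitialized */
  return t;
}
#endif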
2480 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2482 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2485 INTERNAL_SIZE_T nb; /* padded request size */
2487 mchunkptr oldp; /* chunk corresponding to oldmem */
2488 INTERNAL_SIZE_T oldsize; /* its size */
2490 mchunkptr newp; /* chunk to return */
2491 INTERNAL_SIZE_T newsize; /* its size */
2492 Void_t* newmem; /* corresponding user mem */
2494 mchunkptr next; /* next contiguous chunk after oldp */
2495 INTERNAL_SIZE_T nextsize; /* its size */
2497 mchunkptr prev; /* previous contiguous chunk before oldp */
2498 INTERNAL_SIZE_T prevsize; /* its size */
2500 mchunkptr remainder; /* holds split off extra space from newp */
2501 INTERNAL_SIZE_T remainder_size; /* its size */
2503 mchunkptr bck; /* misc temp for linking */
2504 mchunkptr fwd; /* misc temp for linking */
2506 #ifdef REALLOC_ZERO_BYTES_FREES
2507 if (bytes == 0) { fREe(oldmem); return 0; }
2510 if ((long)bytes < 0) return 0;
2512 /* realloc of null is supposed to be same as malloc */
2513 if (oldmem == 0) return mALLOc(bytes);
2515 newp = oldp = mem2chunk(oldmem);
2516 newsize = oldsize = chunksize(oldp);
2519 nb = request2size(bytes);
2522 if (chunk_is_mmapped(oldp))
2525 newp = mremap_chunk(oldp, nb);
2526 if(newp) return chunk2mem(newp);
2528 /* Note the extra SIZE_SZ overhead. */
2529 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2530 /* Must alloc, copy, free. */
2531 newmem = mALLOc(bytes);
2532 if (newmem == 0) return 0; /* propagate failure */
2533 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2539 check_inuse_chunk(oldp);
2541 if ((long)(oldsize) < (long)(nb))
2544 /* Try expanding forward */
2546 next = chunk_at_offset(oldp, oldsize);
2547 if (next == top || !inuse(next))
2549 nextsize = chunksize(next);
2551 /* Forward into top only if a remainder */
2554 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2556 newsize += nextsize;
2557 top = chunk_at_offset(oldp, nb);
2558 set_head(top, (newsize - nb) | PREV_INUSE);
2559 set_head_size(oldp, nb);
2560 return chunk2mem(oldp);
2564 /* Forward into next chunk */
2565 else if (((long)(nextsize + newsize) >= (long)(nb)))
2567 unlink(next, bck, fwd);
2568 newsize += nextsize;
2578 /* Try shifting backwards. */
2580 if (!prev_inuse(oldp))
2582 prev = prev_chunk(oldp);
2583 prevsize = chunksize(prev);
2585 /* try forward + backward first to save a later consolidation */
2592 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2594 unlink(prev, bck, fwd);
2596 newsize += prevsize + nextsize;
2597 newmem = chunk2mem(newp);
2598 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2599 top = chunk_at_offset(newp, nb);
2600 set_head(top, (newsize - nb) | PREV_INUSE);
2601 set_head_size(newp, nb);
2606 /* into next chunk */
2607 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2609 unlink(next, bck, fwd);
2610 unlink(prev, bck, fwd);
2612 newsize += nextsize + prevsize;
2613 newmem = chunk2mem(newp);
2614 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2620 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2622 unlink(prev, bck, fwd);
2624 newsize += prevsize;
2625 newmem = chunk2mem(newp);
2626 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2633 newmem = mALLOc (bytes);
2635 if (newmem == 0) /* propagate failure */
2638 /* Avoid copy if newp is next chunk after oldp. */
2639 /* (This can only happen when new chunk is sbrk'ed.) */
2641 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2643 newsize += chunksize(newp);
2648 /* Otherwise copy, free, and exit */
2649 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2655 split: /* split off extra room in old or expanded chunk */
2657 if (newsize - nb >= MINSIZE) /* split off remainder */
2659 remainder = chunk_at_offset(newp, nb);
2660 remainder_size = newsize - nb;
2661 set_head_size(newp, nb);
2662 set_head(remainder, remainder_size | PREV_INUSE);
2663 set_inuse_bit_at_offset(remainder, remainder_size);
2664 fREe(chunk2mem(remainder)); /* let free() deal with it */
2668 set_head_size(newp, newsize);
2669 set_inuse_bit_at_offset(newp, newsize);
2672 check_inuse_chunk(newp);
2673 return chunk2mem(newp);
2680 memalign requests more than enough space from malloc, finds a spot
2681 within that chunk that meets the alignment request, and then
2682 possibly frees the leading and trailing space.
2684 The alignment argument must be a power of two. This property is not
2685 checked by memalign, so misuse may result in random runtime errors.
2687 8-byte alignment is guaranteed by normal malloc calls, so don't
2688 bother calling memalign with an argument of 8 or less.
2690 Overreliance on memalign is a sure way to fragment space.
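/*
  Usage sketch (illustrative): the alignment argument must be a power
  of two, and anything at or below the default 8-byte alignment should
  simply use malloc. The helper is hypothetical.
*/
#if 0
#include <assert.h>
#include <stdlib.h>

static void* demo_cache_aligned(size_t n)
{
  void* p = memalign(64, n);  /* e.g. align to a 64-byte cache line */
  assert(p == 0 || ((unsigned long)p & 63) == 0);
  return p;
}
#endif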
2696 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2698 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2701 INTERNAL_SIZE_T nb; /* padded request size */
2702 char* m; /* memory returned by malloc call */
2703 mchunkptr p; /* corresponding chunk */
2704 char* brk; /* alignment point within p */
2705 mchunkptr newp; /* chunk to return */
2706 INTERNAL_SIZE_T newsize; /* its size */
2707   INTERNAL_SIZE_T leadsize;   /* leading space before alignment point */
2708 mchunkptr remainder; /* spare room at end to split off */
2709 long remainder_size; /* its size */
2711 if ((long)bytes < 0) return 0;
2713 /* If need less alignment than we give anyway, just relay to malloc */
2715 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2717 /* Otherwise, ensure that it is at least a minimum chunk size */
2719 if (alignment < MINSIZE) alignment = MINSIZE;
2721 /* Call malloc with worst case padding to hit alignment. */
2723 nb = request2size(bytes);
2724 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2726 if (m == 0) return 0; /* propagate failure */
2730 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2733 if(chunk_is_mmapped(p))
2734 return chunk2mem(p); /* nothing more to do */
2737 else /* misaligned */
2740 Find an aligned spot inside chunk.
2741 Since we need to give back leading space in a chunk of at
2742 least MINSIZE, if the first calculation places us at
2743 a spot with less than MINSIZE leader, we can move to the
2744 next aligned spot -- we've allocated enough total room so that
2745 this is always possible.
2748 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2749 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2751 newp = (mchunkptr)brk;
2752 leadsize = brk - (char*)(p);
2753 newsize = chunksize(p) - leadsize;
2756 if(chunk_is_mmapped(p))
2758 newp->prev_size = p->prev_size + leadsize;
2759 set_head(newp, newsize|IS_MMAPPED);
2760 return chunk2mem(newp);
2764 /* give back leader, use the rest */
2766 set_head(newp, newsize | PREV_INUSE);
2767 set_inuse_bit_at_offset(newp, newsize);
2768 set_head_size(p, leadsize);
2772 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2775 /* Also give back spare room at the end */
2777 remainder_size = chunksize(p) - nb;
2779 if (remainder_size >= (long)MINSIZE)
2781 remainder = chunk_at_offset(p, nb);
2782 set_head(remainder, remainder_size | PREV_INUSE);
2783 set_head_size(p, nb);
2784 fREe(chunk2mem(remainder));
2787 check_inuse_chunk(p);
2788 return chunk2mem(p);
2793 valloc just invokes memalign with alignment argument equal
2794 to the page size of the system (or as near to this as can
2795 be figured out from all the includes/defines above.)
2799 Void_t* vALLOc(size_t bytes)
2801 Void_t* vALLOc(bytes) size_t bytes;
2804 return mEMALIGn (malloc_getpagesize, bytes);
2808 pvalloc just invokes valloc for the nearest pagesize
2809     that will accommodate the request
2814 Void_t* pvALLOc(size_t bytes)
2816 Void_t* pvALLOc(bytes) size_t bytes;
2819 size_t pagesize = malloc_getpagesize;
2820 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
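/*
  Worked example (illustrative): with 4096-byte pages, pvalloc(1)
  rounds the request up to 4096 and pvalloc(4097) to 8192, each
  returned block starting on a page boundary.
*/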
2825 calloc calls malloc, then zeroes out the allocated chunk.
2830 Void_t* cALLOc(size_t n, size_t elem_size)
2832 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2836 INTERNAL_SIZE_T csz;
2838 INTERNAL_SIZE_T sz = n * elem_size;
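  /* Beware: this multiplication can overflow INTERNAL_SIZE_T for very
     large arguments; this version performs no overflow check, so
     callers should verify n <= (size_t)-1 / elem_size themselves. */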
2841   /* check if malloc_extend_top was called, in which case there is no need to clear */
2843 mchunkptr oldtop = top;
2844 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2846 Void_t* mem = mALLOc (sz);
2848 if ((long)n < 0) return 0;
2856   /* Two optional cases in which clearing is not necessary */
2860 if (chunk_is_mmapped(p)) return mem;
2866 if (p == oldtop && csz > oldtopsize)
2868 /* clear only the bytes from non-freshly-sbrked memory */
2873 MALLOC_ZERO(mem, csz - SIZE_SZ);
2880 cfree just calls free. It is needed/defined on some systems
2881 that pair it with calloc, presumably for odd historical reasons.
2885 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2887 void cfree(Void_t *mem)
2889 void cfree(mem) Void_t *mem;
2898 Malloc_trim gives memory back to the system (via negative
2899 arguments to sbrk) if there is unused memory at the `high' end of
2900 the malloc pool. You can call this after freeing large blocks of
2901 memory to potentially reduce the system-level memory requirements
2902 of a program. However, it cannot guarantee to reduce memory. Under
2903 some allocation patterns, some large free blocks of memory will be
2904     locked between two used chunks, so they cannot be given back to the system.
2907 The `pad' argument to malloc_trim represents the amount of free
2908 trailing space to leave untrimmed. If this argument is zero,
2909 only the minimum amount of memory to maintain internal data
2910 structures will be left (one page or less). Non-zero arguments
2911 can be supplied to maintain enough trailing space to service
2912     future expected allocations without having to re-obtain memory from the system.
2915 Malloc_trim returns 1 if it actually released any memory, else 0.
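/*
  Worked example (illustrative) of the `extra' computation below:
  with pagesz = 4096, pad = 0, MINSIZE = 16 and a 20000-byte top
  chunk, ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 16384, so
  16384 bytes are released and 3616 (under one page) stay in top.
*/
#if 0
#include <assert.h>

static void demo_trim_extra(void)
{
  long pagesz = 4096, pad = 0, minsize = 16;
  long top_size = 20000;
  long extra = ((top_size - pad - minsize + (pagesz - 1)) / pagesz - 1) * pagesz;
  assert(extra == 16384);
}
#endif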
2920 int malloc_trim(size_t pad)
2922 int malloc_trim(pad) size_t pad;
2925 long top_size; /* Amount of top-most memory */
2926 long extra; /* Amount to release */
2927 char* current_brk; /* address returned by pre-check sbrk call */
2928 char* new_brk; /* address returned by negative sbrk call */
2930 unsigned long pagesz = malloc_getpagesize;
2932 top_size = chunksize(top);
2933 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2935 if (extra < (long)pagesz) /* Not enough memory to release */
2940 /* Test to make sure no one else called sbrk */
2941 current_brk = (char*)(MORECORE (0));
2942 if (current_brk != (char*)(top) + top_size)
2943 return 0; /* Apparently we don't own memory; must fail */
2947 new_brk = (char*)(MORECORE (-extra));
2949 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
2951 /* Try to figure out what we have */
2952 current_brk = (char*)(MORECORE (0));
2953 top_size = current_brk - (char*)top;
2954 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
2956 sbrked_mem = current_brk - sbrk_base;
2957 set_head(top, top_size | PREV_INUSE);
2965 /* Success. Adjust top accordingly. */
2966 set_head(top, (top_size - extra) | PREV_INUSE);
2967 sbrked_mem -= extra;
2978 This routine tells you how many bytes you can actually use in an
2979 allocated chunk, which may be more than you requested (although
2980 often not). You can use this many bytes without worrying about
2981 overwriting other allocated objects. Not a particularly great
2982 programming practice, but still sometimes useful.
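/*
  Usage sketch (illustrative): the whole usable region may safely be
  written, even beyond the requested size. The helper is hypothetical.
*/
#if 0
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static void demo_usable(void)
{
  char* p = malloc(13);
  if (p != 0)
  {
    size_t u = malloc_usable_size(p);  /* e.g. 20 with 4-byte SIZE_SZ */
    assert(u >= 13);
    memset(p, 0, u);                   /* entire usable region is ours */
    free(p);
  }
}
#endif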
2987 size_t malloc_usable_size(Void_t* mem)
2989 size_t malloc_usable_size(mem) Void_t* mem;
2998 if(!chunk_is_mmapped(p))
3000 if (!inuse(p)) return 0;
3001 check_inuse_chunk(p);
3002 return chunksize(p) - SIZE_SZ;
3004 return chunksize(p) - 2*SIZE_SZ;
3008 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3011 static void malloc_update_mallinfo()
3020 INTERNAL_SIZE_T avail = chunksize(top);
3021 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3023 for (i = 1; i < NAV; ++i)
3026 for (p = last(b); p != b; p = p->bk)
3029 check_free_chunk(p);
3030 for (q = next_chunk(p);
3031 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3033 check_inuse_chunk(q);
3035 avail += chunksize(p);
3040 current_mallinfo.ordblks = navail;
3041 current_mallinfo.uordblks = sbrked_mem - avail;
3042 current_mallinfo.fordblks = avail;
3043 current_mallinfo.hblks = n_mmaps;
3044 current_mallinfo.hblkhd = mmapped_mem;
3045 current_mallinfo.keepcost = chunksize(top);
3054     Prints the amount of space obtained from the system (both
3055 via sbrk and mmap), the maximum amount (which may be more than
3056 current if malloc_trim and/or munmap got called), the maximum
3057 number of simultaneous mmap regions used, and the current number
3058 of bytes allocated via malloc (or realloc, etc) but not yet
3059 freed. (Note that this is the number of bytes allocated, not the
3060 number requested. It will be larger than the number requested
3061 because of alignment and bookkeeping overhead.)
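/*
  Illustrative output (hypothetical numbers; the format comes from the
  printf calls below):

      max system bytes =     266240
      system bytes =     200704
      in use bytes =      17392
      max mmap regions =          2
*/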
3068 malloc_update_mallinfo();
3069 printf("max system bytes = %10u\n",
3070 (unsigned int)(max_total_mem));
3071 printf("system bytes = %10u\n",
3072 (unsigned int)(sbrked_mem + mmapped_mem));
3073 printf("in use bytes = %10u\n",
3074 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3076 printf("max mmap regions = %10u\n",
3077 (unsigned int)max_n_mmaps);
3083     mallinfo returns a copy of the updated current mallinfo structure.
3087 struct mallinfo mALLINFo()
3089 malloc_update_mallinfo();
3090 return current_mallinfo;
3097 mallopt is the general SVID/XPG interface to tunable parameters.
3098 The format is to provide a (parameter-number, parameter-value) pair.
3099 mallopt then sets the corresponding parameter to the argument
3100 value if it can (i.e., so long as the value is meaningful),
3101 and returns 1 if successful else 0.
3103 See descriptions of tunable parameters above.
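/*
  Usage sketch (illustrative): adjusting tunables via the SVID-style
  interface, assuming the M_* parameter numbers from the accompanying
  malloc.h.
*/
#if 0
#include <malloc.h>

static void demo_tuning(void)
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);  /* trim less eagerly        */
  mallopt(M_TOP_PAD,        64 * 1024);   /* keep spare space in top  */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* mmap only large requests */
}
#endif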
3108 int mALLOPt(int param_number, int value)
3110 int mALLOPt(param_number, value) int param_number; int value;
3113 switch(param_number)
3115 case M_TRIM_THRESHOLD:
3116 trim_threshold = value; return 1;
3118 top_pad = value; return 1;
3119 case M_MMAP_THRESHOLD:
3120 mmap_threshold = value; return 1;
3123 n_mmaps_max = value; return 1;
3125 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3137 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
3138 * return null for negative arguments
3139 * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3140 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3141 (e.g. WIN32 platforms)
3142 * Cleanup up header file inclusion for WIN32 platforms
3143 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3144 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3145 memory allocation routines
3146 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3147 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3148 usage of 'assert' in non-WIN32 code
3149       * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to avoid an infinite loop
3151 * Always call 'fREe()' rather than 'free()'
3153 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3154 * Fixed ordering problem with boundary-stamping
3156 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3157 * Added pvalloc, as recommended by H.J. Liu
3158 * Added 64bit pointer support mainly from Wolfram Gloger
3159 * Added anonymously donated WIN32 sbrk emulation
3160 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3161       * malloc_extend_top: fix mask error that caused wastage after foreign sbrks
3163 * Add linux mremap support code from HJ Liu
3165 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3166 * Integrated most documentation with the code.
3167 * Add support for mmap, with help from
3168 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3169 * Use last_remainder in more cases.
3170 * Pack bins using idea from colin@nyx10.cs.du.edu
3171       * Use ordered bins instead of best-fit threshold
3172 * Eliminate block-local decls to simplify tracing and debugging.
3173 * Support another case of realloc via move into top
3174       * Fix error occurring when initial sbrk_base not word-aligned.
3175 * Rely on page size for units instead of SBRK_UNIT to
3176 avoid surprises about sbrk alignment conventions.
3177 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3178 (raymond@es.ele.tue.nl) for the suggestion.
3179 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3180 * More precautions for cases where other routines call sbrk,
3181 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3182 * Added macros etc., allowing use in linux libc from
3183 H.J. Lu (hjl@gnu.ai.mit.edu)
3184 * Inverted this history list
3186 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3187 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3188 * Removed all preallocation code since under current scheme
3189 the work required to undo bad preallocations exceeds
3190 the work saved in good cases for most test programs.
3191 * No longer use return list or unconsolidated bins since
3192 no scheme using them consistently outperforms those that don't
3193 given above changes.
3194 * Use best fit for very large chunks to prevent some worst-cases.
3195 * Added some support for debugging
3197 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3198 * Removed footers when chunks are in use. Thanks to
3199 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3201 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3202 * Added malloc_trim, with help from Wolfram Gloger
3203 (wmglo@Dent.MED.Uni-Muenchen.DE).
3205 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3207 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3208 * realloc: try to expand in both directions
3209 * malloc: swap order of clean-bin strategy;
3210 * realloc: only conditionally expand backwards
3211 * Try not to scavenge used bins
3212 * Use bin counts as a guide to preallocation
3213 * Occasionally bin return list chunks in first scan
3214 * Add a few optimizations from colin@nyx10.cs.du.edu
3216 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3217 * faster bin computation & slightly different binning
3218 * merged all consolidations to one part of malloc proper
3219 (eliminating old malloc_find_space & malloc_clean_bin)
3220 * Scan 2 returns chunks (not just 1)
3221 * Propagate failure in realloc if malloc returns 0
3222 * Add stuff to allow compilation on non-ANSI compilers
3223 from kpv@research.att.com
3225 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3226 * removed potential for odd address access in prev_chunk
3227 * removed dependency on getpagesize.h
3228 * misc cosmetics and a bit more internal documentation
3229 * anticosmetics: mangled names in macros to evade debugger strangeness
3230 * tested on sparc, hp-700, dec-mips, rs6000
3231 with gcc & native cc (hp, dec only) allowing
3232 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3234 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3235 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3236 structure of old version, but most details differ.)