-#if 0 /* Moved to malloc.h */
-/* ---------- To make a malloc.h, start cutting here ------------ */
-
-/*
- A version of malloc/free/realloc written by Doug Lea and released to the
- public domain. Send questions/comments/complaints/performance data
- to dl@cs.oswego.edu
-
-* VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
-
- Note: There may be an updated version of this malloc obtainable at
- ftp://g.oswego.edu/pub/misc/malloc.c
- Check before installing!
-
-* Why use this malloc?
-
- This is not the fastest, most space-conserving, most portable, or
- most tunable malloc ever written. However it is among the fastest
- while also being among the most space-conserving, portable and tunable.
- Consistent balance across these factors results in a good general-purpose
- allocator. For a high-level description, see
- http://g.oswego.edu/dl/html/malloc.html
-
-* Synopsis of public routines
-
- (Much fuller descriptions are contained in the program documentation below.)
-
- malloc(size_t n);
- Return a pointer to a newly allocated chunk of at least n bytes, or null
- if no space is available.
- free(Void_t* p);
- Release the chunk of memory pointed to by p, or no effect if p is null.
- realloc(Void_t* p, size_t n);
- Return a pointer to a chunk of size n that contains the same data
- as does chunk p up to the minimum of (n, p's size) bytes, or null
- if no space is available. The returned pointer may or may not be
- the same as p. If p is null, equivalent to malloc. Unless the
- #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
- size argument of zero (re)allocates a minimum-sized chunk.
- memalign(size_t alignment, size_t n);
- Return a pointer to a newly allocated chunk of n bytes, aligned
- in accord with the alignment argument, which must be a power of
- two.
- valloc(size_t n);
- Equivalent to memalign(pagesize, n), where pagesize is the page
- size of the system (or as near to this as can be figured out from
- all the includes/defines below.)
- pvalloc(size_t n);
- Equivalent to valloc(minimum-page-that-holds(n)), that is,
- round up n to nearest pagesize.
- calloc(size_t unit, size_t quantity);
- Returns a pointer to quantity * unit bytes, with all locations
- set to zero.
- cfree(Void_t* p);
- Equivalent to free(p).
- malloc_trim(size_t pad);
- Release all but pad bytes of freed top-most memory back
- to the system. Return 1 if successful, else 0.
- malloc_usable_size(Void_t* p);
-     Report the number of usable allocated bytes associated with allocated
- chunk p. This may or may not report more bytes than were requested,
- due to alignment and minimum size constraints.
- malloc_stats();
- Prints brief summary statistics.
- mallinfo()
- Returns (by copy) a struct containing various summary statistics.
- mallopt(int parameter_number, int parameter_value)
- Changes one of the tunable parameters described below. Returns
- 1 if successful in changing the parameter, else 0.
-
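   As a quick sketch of how the routines above combine (the error
   handling is the caller's; allocation calls return null on failure):

     void example()
     {
       char* p = (char*) malloc(100);    /* at least 100 usable bytes */
       char* q;
       if (p == 0) return;               /* no space available */
       q = (char*) realloc(p, 200);      /* may move; old data kept */
       if (q == 0) { free(p); return; }  /* p is still valid here */
       free(q);                          /* release the chunk */
     }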
-* Vital statistics:
-
- Alignment: 8-byte
- 8 byte alignment is currently hardwired into the design. This
- seems to suffice for all current machines and C compilers.
-
- Assumed pointer representation: 4 or 8 bytes
- Code for 8-byte pointers is untested by me but has worked
- reliably by Wolfram Gloger, who contributed most of the
- changes supporting this.
-
- Assumed size_t representation: 4 or 8 bytes
- Note that size_t is allowed to be 4 bytes even if pointers are 8.
-
- Minimum overhead per allocated chunk: 4 or 8 bytes
- Each malloced chunk has a hidden overhead of 4 bytes holding size
- and status information.
-
- Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
-                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
-
- When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
- ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
- needed; 4 (8) for a trailing size field
- and 8 (16) bytes for free list pointers. Thus, the minimum
- allocatable size is 16/24/32 bytes.
-
- Even a request for zero bytes (i.e., malloc(0)) returns a
- pointer to something of the minimum allocatable size.
-
- Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
- 8-byte size_t: 2^63 - 16 bytes
-
- It is assumed that (possibly signed) size_t bit values suffice to
- represent chunk sizes. `Possibly signed' is due to the fact
- that `size_t' may be defined on a system as either a signed or
- an unsigned type. To be conservative, values that would appear
- as negative numbers are avoided.
- Requests for sizes with a negative sign bit when the request
-  size is treated as a long will return null.
-
- Maximum overhead wastage per allocated chunk: normally 15 bytes
-
-  Alignment demands, plus the minimum allocatable size restriction,
- make the normal worst-case wastage 15 bytes (i.e., up to 15
- more bytes will be allocated than were requested in malloc), with
- two exceptions:
- 1. Because requests for zero bytes allocate non-zero space,
- the worst case wastage for a request of zero bytes is 24 bytes.
- 2. For requests >= mmap_threshold that are serviced via
- mmap(), the worst case wastage is 8 bytes plus the remainder
- from a system page (the minimal mmap unit); typically 4096 bytes.
-
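   To make the alignment and minimum-size arithmetic above concrete,
   here is a sketch of the request-padding rule for the common 4-byte
   size_t case (the EX_ names are illustrative stand-ins, not the real
   internal macros):

     #define EX_SIZE_SZ 4   /* one hidden size/status field */
     #define EX_MINSIZE 16  /* smallest allocatable chunk */
     #define EX_REQUEST2SIZE(req) \
       (((req) + EX_SIZE_SZ + 7) < EX_MINSIZE ? EX_MINSIZE : \
        (((req) + EX_SIZE_SZ + 7) & ~7))

     EX_REQUEST2SIZE(0)  == 16   (the minimum)
     EX_REQUEST2SIZE(13) == 24   (4 overhead, rounded up to 8)
     EX_REQUEST2SIZE(25) == 32   (never more than 15 bytes over)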
-* Limitations
-
- Here are some features that are NOT currently supported
-
- * No user-definable hooks for callbacks and the like.
- * No automated mechanism for fully checking that all accesses
- to malloced memory stay within their bounds.
- * No support for compaction.
-
-* Synopsis of compile-time options:
-
- People have reported using previous versions of this malloc on all
- versions of Unix, sometimes by tweaking some of the defines
- below. It has been tested most extensively on Solaris and
- Linux. It is also reported to work on WIN32 platforms.
- People have also reported adapting this malloc for use in
- stand-alone embedded systems.
-
- The implementation is in straight, hand-tuned ANSI C. Among other
- consequences, it uses a lot of macros. Because of this, to be at
- all usable, this code should be compiled using an optimizing compiler
- (for example gcc -O2) that can simplify expressions and control
- paths.
-
- __STD_C (default: derived from C compiler defines)
- Nonzero if using ANSI-standard C compiler, a C++ compiler, or
- a C compiler sufficiently close to ANSI to get away with it.
- DEBUG (default: NOT defined)
- Define to enable debugging. Adds fairly extensive assertion-based
- checking to help track down memory errors, but noticeably slows down
- execution.
- REALLOC_ZERO_BYTES_FREES (default: NOT defined)
- Define this if you think that realloc(p, 0) should be equivalent
- to free(p). Otherwise, since malloc returns a unique pointer for
- malloc(0), so does realloc(p, 0).
- HAVE_MEMCPY (default: defined)
- Define if you are not otherwise using ANSI STD C, but still
- have memcpy and memset in your C library and want to use them.
- Otherwise, simple internal versions are supplied.
- USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
- Define as 1 if you want the C library versions of memset and
- memcpy called in realloc and calloc (otherwise macro versions are used).
- At least on some platforms, the simple macro versions usually
- outperform libc versions.
- HAVE_MMAP (default: defined as 1)
- Define to non-zero to optionally make malloc() use mmap() to
- allocate very large blocks.
- HAVE_MREMAP (default: defined as 0 unless Linux libc set)
- Define to non-zero to optionally make realloc() use mremap() to
- reallocate very large blocks.
- malloc_getpagesize (default: derived from system #includes)
- Either a constant or routine call returning the system page size.
- HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
- Optionally define if you are on a system with a /usr/include/malloc.h
- that declares struct mallinfo. It is not at all necessary to
- define this even if you do, but will ensure consistency.
- INTERNAL_SIZE_T (default: size_t)
- Define to a 32-bit type (probably `unsigned int') if you are on a
- 64-bit machine, yet do not want or need to allow malloc requests of
- greater than 2^31 to be handled. This saves space, especially for
- very small chunks.
- INTERNAL_LINUX_C_LIB (default: NOT defined)
- Defined only when compiled as part of Linux libc.
- Also note that there is some odd internal name-mangling via defines
- (for example, internally, `malloc' is named `mALLOc') needed
- when compiling in this case. These look funny but don't otherwise
- affect anything.
- WIN32 (default: undefined)
- Define this on MS win (95, nt) platforms to compile in sbrk emulation.
- LACKS_UNISTD_H (default: undefined if not WIN32)
- Define this if your system does not have a <unistd.h>.
- LACKS_SYS_PARAM_H (default: undefined if not WIN32)
- Define this if your system does not have a <sys/param.h>.
- MORECORE (default: sbrk)
- The name of the routine to call to obtain more memory from the system.
- MORECORE_FAILURE (default: -1)
- The value returned upon failure of MORECORE.
- MORECORE_CLEARS (default 1)
- True (1) if the routine mapped to MORECORE zeroes out memory (which
- holds for sbrk).
- DEFAULT_TRIM_THRESHOLD
- DEFAULT_TOP_PAD
- DEFAULT_MMAP_THRESHOLD
- DEFAULT_MMAP_MAX
- Default values of tunable parameters (described in detail below)
- controlling interaction with host system routines (sbrk, mmap, etc).
- These values may also be changed dynamically via mallopt(). The
- preset defaults are those that give best performance for typical
- programs/systems.
- USE_DL_PREFIX (default: undefined)
- Prefix all public routines with the string 'dl'. Useful to
- quickly avoid procedure declaration conflicts and linker symbol
- conflicts with existing memory allocation routines.
-
-
-*/
-
-\f
-
-
-/* Preliminaries */
-
-#ifndef __STD_C
-#ifdef __STDC__
-#define __STD_C 1
-#else
-#if __cplusplus
-#define __STD_C 1
-#else
-#define __STD_C 0
-#endif /*__cplusplus*/
-#endif /*__STDC__*/
-#endif /*__STD_C*/
-
-#ifndef Void_t
-#if (__STD_C || defined(WIN32))
-#define Void_t void
-#else
-#define Void_t char
-#endif
-#endif /*Void_t*/
-
-#if __STD_C
-#include <stddef.h> /* for size_t */
-#else
-#include <sys/types.h>
-#endif
+#include <common.h>
+#include <log.h>
-#ifdef __cplusplus
-extern "C" {
+#if CONFIG_IS_ENABLED(UNIT_TEST)
+#define DEBUG
#endif
-#include <stdio.h> /* needed for malloc_stats */
-
-
-/*
- Compile-time options
-*/
-
-
-/*
- Debugging:
-
- Because freed chunks may be overwritten with link fields, this
- malloc will often die when freed memory is overwritten by user
- programs. This can be very effective (albeit in an annoying way)
- in helping track down dangling pointers.
-
- If you compile with -DDEBUG, a number of assertion checks are
- enabled that will catch more memory errors. You probably won't be
- able to make much sense of the actual assertion errors, but they
- should help you locate incorrectly overwritten memory. The
- checking is fairly extensive, and will slow down execution
- noticeably. Calling malloc_stats or mallinfo with DEBUG set will
- attempt to check every non-mmapped allocated and free chunk in the
-  course of computing the summaries. (By nature, mmapped regions
- cannot be checked very much automatically.)
-
- Setting DEBUG may also be helpful if you are trying to modify
- this code. The assertions in the check routines spell out in more
- detail the assumptions and invariants underlying the algorithms.
-
-*/
+#include <malloc.h>
+#include <asm/io.h>
#ifdef DEBUG
-#include <assert.h>
-#else
-#define assert(x) ((void)0)
-#endif
-
-
-/*
- INTERNAL_SIZE_T is the word-size used for internal bookkeeping
- of chunk sizes. On a 64-bit machine, you can reduce malloc
- overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
- at the expense of not being able to handle requests greater than
- 2^31. This limitation is hardly ever a concern; you are encouraged
- to set this. However, the default version is the same as size_t.
-*/
-
-#ifndef INTERNAL_SIZE_T
-#define INTERNAL_SIZE_T size_t
-#endif
-
-/*
- REALLOC_ZERO_BYTES_FREES should be set if a call to
- realloc with zero bytes should be the same as a call to free.
- Some people think it should. Otherwise, since this malloc
- returns a unique pointer for malloc(0), so does realloc(p, 0).
-*/
-
-
-/* #define REALLOC_ZERO_BYTES_FREES */
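   For example, given some earlier allocation p, enabling the define
   above makes

     q = realloc(p, 0);

   free p and return null; left disabled (the default), the same call
   returns a fresh minimum-sized chunk, just as malloc(0) does.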
-
-
-/*
- WIN32 causes an emulation of sbrk to be compiled in
- mmap-based options are not currently supported in WIN32.
-*/
-
-/* #define WIN32 */
-#ifdef WIN32
-#define MORECORE wsbrk
-#define HAVE_MMAP 0
-
-#define LACKS_UNISTD_H
-#define LACKS_SYS_PARAM_H
-
-/*
- Include 'windows.h' to get the necessary declarations for the
- Microsoft Visual C++ data structures and routines used in the 'sbrk'
- emulation.
-
- Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
- Visual C++ header files are included.
-*/
-#define WIN32_LEAN_AND_MEAN
-#include <windows.h>
-#endif
-
-
-/*
- HAVE_MEMCPY should be defined if you are not otherwise using
- ANSI STD C, but still have memcpy and memset in your C library
- and want to use them in calloc and realloc. Otherwise simple
- macro versions are defined here.
-
- USE_MEMCPY should be defined as 1 if you actually want to
- have memset and memcpy called. People report that the macro
- versions are often enough faster than libc versions on many
- systems that it is better to use them.
-
-*/
-
-#define HAVE_MEMCPY
-
-#ifndef USE_MEMCPY
-#ifdef HAVE_MEMCPY
-#define USE_MEMCPY 1
-#else
-#define USE_MEMCPY 0
-#endif
-#endif
-
-#if (__STD_C || defined(HAVE_MEMCPY))
-
-#if __STD_C
-void* memset(void*, int, size_t);
-void* memcpy(void*, const void*, size_t);
-#else
-#ifdef WIN32
-// On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
-// 'windows.h'
-#else
-Void_t* memset();
-Void_t* memcpy();
-#endif
-#endif
-#endif
-
-#if USE_MEMCPY
-
-/* The following macros are only invoked with (2n+1)-multiples of
- INTERNAL_SIZE_T units, with a positive integer n. This is exploited
- for fast inline execution when n is small. */
-
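/* (Concretely: with 4-byte INTERNAL_SIZE_T, nbytes is one of 12, 20,
   28, 36, ... bytes, i.e. 3, 5, 7 or 9 words for the inline cases.
   The unrolled stores below cover exactly those counts: two stores at
   each of the >= 5, >= 7 and >= 9 word tests, plus three unconditional
   ones, before falling back to memset/memcpy for anything larger.) */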
-#define MALLOC_ZERO(charp, nbytes) \
-do { \
- INTERNAL_SIZE_T mzsz = (nbytes); \
- if(mzsz <= 9*sizeof(mzsz)) { \
- INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
- if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
- *mz++ = 0; \
- if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
- *mz++ = 0; \
- if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
- *mz++ = 0; }}} \
- *mz++ = 0; \
- *mz++ = 0; \
- *mz = 0; \
- } else memset((charp), 0, mzsz); \
-} while(0)
-
-#define MALLOC_COPY(dest,src,nbytes) \
-do { \
- INTERNAL_SIZE_T mcsz = (nbytes); \
- if(mcsz <= 9*sizeof(mcsz)) { \
- INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
- INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
- if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
- *mcdst++ = *mcsrc++; \
- if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
- *mcdst++ = *mcsrc++; \
- if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
- *mcdst++ = *mcsrc++; }}} \
- *mcdst++ = *mcsrc++; \
- *mcdst++ = *mcsrc++; \
- *mcdst = *mcsrc ; \
- } else memcpy(dest, src, mcsz); \
-} while(0)
-
-#else /* !USE_MEMCPY */
-
-/* Use Duff's device for good zeroing/copying performance. */
-
-#define MALLOC_ZERO(charp, nbytes) \
-do { \
- INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
- long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
- if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
- switch (mctmp) { \
- case 0: for(;;) { *mzp++ = 0; \
- case 7: *mzp++ = 0; \
- case 6: *mzp++ = 0; \
- case 5: *mzp++ = 0; \
- case 4: *mzp++ = 0; \
- case 3: *mzp++ = 0; \
- case 2: *mzp++ = 0; \
- case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
- } \
-} while(0)
-
-#define MALLOC_COPY(dest,src,nbytes) \
-do { \
- INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
- INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
- long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
- if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
- switch (mctmp) { \
- case 0: for(;;) { *mcdst++ = *mcsrc++; \
- case 7: *mcdst++ = *mcsrc++; \
- case 6: *mcdst++ = *mcsrc++; \
- case 5: *mcdst++ = *mcsrc++; \
- case 4: *mcdst++ = *mcsrc++; \
- case 3: *mcdst++ = *mcsrc++; \
- case 2: *mcdst++ = *mcsrc++; \
- case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
- } \
-} while(0)
-
-#endif
-
-
-/*
- Define HAVE_MMAP to optionally make malloc() use mmap() to
- allocate very large blocks. These will be returned to the
- operating system immediately after a free().
-*/
-
-#ifndef HAVE_MMAP
-#define HAVE_MMAP 1
-#endif
-
-/*
- Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
- large blocks. This is currently only possible on Linux with
- kernel versions newer than 1.3.77.
-*/
-
-#ifndef HAVE_MREMAP
-#ifdef INTERNAL_LINUX_C_LIB
-#define HAVE_MREMAP 1
-#else
-#define HAVE_MREMAP 0
-#endif
-#endif
-
-#if HAVE_MMAP
-
-#include <unistd.h>
-#include <fcntl.h>
-#include <sys/mman.h>
-
-#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
-#define MAP_ANONYMOUS MAP_ANON
-#endif
-
-#endif /* HAVE_MMAP */
-
-/*
- Access to system page size. To the extent possible, this malloc
- manages memory from the system in page-size units.
-
- The following mechanics for getpagesize were adapted from
- bsd/gnu getpagesize.h
-*/
-
-#ifndef LACKS_UNISTD_H
-# include <unistd.h>
-#endif
-
-#ifndef malloc_getpagesize
-# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
-# ifndef _SC_PAGE_SIZE
-# define _SC_PAGE_SIZE _SC_PAGESIZE
-# endif
-# endif
-# ifdef _SC_PAGE_SIZE
-# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
-# else
-# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
- extern size_t getpagesize();
-# define malloc_getpagesize getpagesize()
-# else
-# ifdef WIN32
-# define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
-# else
-# ifndef LACKS_SYS_PARAM_H
-# include <sys/param.h>
-# endif
-# ifdef EXEC_PAGESIZE
-# define malloc_getpagesize EXEC_PAGESIZE
-# else
-# ifdef NBPG
-# ifndef CLSIZE
-# define malloc_getpagesize NBPG
-# else
-# define malloc_getpagesize (NBPG * CLSIZE)
-# endif
-# else
-# ifdef NBPC
-# define malloc_getpagesize NBPC
-# else
-# ifdef PAGESIZE
-# define malloc_getpagesize PAGESIZE
-# else
-# define malloc_getpagesize (4096) /* just guess */
-# endif
-# endif
-# endif
-# endif
-# endif
-# endif
-# endif
-#endif
-
-
-
-/*
-
- This version of malloc supports the standard SVID/XPG mallinfo
- routine that returns a struct containing the same kind of
- information you can get from malloc_stats. It should work on
- any SVID/XPG compliant system that has a /usr/include/malloc.h
- defining struct mallinfo. (If you'd like to install such a thing
- yourself, cut out the preliminary declarations as described above
- and below and save them in a malloc.h file. But there's no
- compelling reason to bother to do this.)
-
- The main declaration needed is the mallinfo struct that is returned
-  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
- bunch of fields, most of which are not even meaningful in this
-  version of malloc. Some of these fields are instead filled by
- mallinfo() with other numbers that might possibly be of interest.
-
- HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
- /usr/include/malloc.h file that includes a declaration of struct
- mallinfo. If so, it is included; else an SVID2/XPG2 compliant
- version is declared below. These must be precisely the same for
- mallinfo() to work.
-
-*/
-
-/* #define HAVE_USR_INCLUDE_MALLOC_H */
-
-#if HAVE_USR_INCLUDE_MALLOC_H
-#include "/usr/include/malloc.h"
-#else
-
-/* SVID2/XPG mallinfo structure */
-
-struct mallinfo {
- int arena; /* total space allocated from system */
- int ordblks; /* number of non-inuse chunks */
- int smblks; /* unused -- always zero */
- int hblks; /* number of mmapped regions */
- int hblkhd; /* total space in mmapped regions */
- int usmblks; /* unused -- always zero */
- int fsmblks; /* unused -- always zero */
- int uordblks; /* total allocated space */
- int fordblks; /* total non-inuse space */
- int keepcost; /* top-most, releasable (via malloc_trim) space */
-};
-
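/* A sketch of inspecting the allocator through the struct above; the
   mallinfo() routine itself is declared with the other public routines
   below:

     struct mallinfo mi = mallinfo();
     printf("arena=%d in-use=%d free=%d trimmable=%d\n",
            mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/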
-/* SVID2/XPG mallopt options */
-
-#define M_MXFAST 1 /* UNUSED in this malloc */
-#define M_NLBLKS 2 /* UNUSED in this malloc */
-#define M_GRAIN 3 /* UNUSED in this malloc */
-#define M_KEEP 4 /* UNUSED in this malloc */
-
-#endif
-
-/* mallopt options that actually do something */
-
-#define M_TRIM_THRESHOLD -1
-#define M_TOP_PAD -2
-#define M_MMAP_THRESHOLD -3
-#define M_MMAP_MAX -4
-
-
-
-#ifndef DEFAULT_TRIM_THRESHOLD
-#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
-#endif
-
-/*
- M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
- to keep before releasing via malloc_trim in free().
-
- Automatic trimming is mainly useful in long-lived programs.
- Because trimming via sbrk can be slow on some systems, and can
- sometimes be wasteful (in cases where programs immediately
- afterward allocate more large chunks) the value should be high
- enough so that your overall system performance would improve by
- releasing.
-
- The trim threshold and the mmap control parameters (see below)
- can be traded off with one another. Trimming and mmapping are
- two different ways of releasing unused memory back to the
- system. Between these two, it is often possible to keep
- system-level demands of a long-lived program down to a bare
- minimum. For example, in one test suite of sessions measuring
- the XF86 X server on Linux, using a trim threshold of 128K and a
- mmap threshold of 192K led to near-minimal long term resource
- consumption.
-
- If you are using this malloc in a long-lived program, it should
- pay to experiment with these values. As a rough guide, you
- might set to a value close to the average size of a process
- (program) running on your system. Releasing this much memory
- would allow such a process to run in memory. Generally, it's
-  worth it to tune for trimming rather than memory mapping when a
- program undergoes phases where several large chunks are
- allocated and released in ways that can reuse each other's
- storage, perhaps mixed with phases where there are no such
- chunks at all. And in well-behaved long-lived programs,
- controlling release of large blocks via trimming versus mapping
- is usually faster.
-
- However, in most programs, these parameters serve mainly as
- protection against the system-level effects of carrying around
- massive amounts of unneeded memory. Since frequent calls to
- sbrk, mmap, and munmap otherwise degrade performance, the default
- parameters are set to relatively high values that serve only as
- safeguards.
-
- The default trim value is high enough to cause trimming only in
- fairly extreme (by current memory consumption standards) cases.
- It must be greater than page size to have any useful effect. To
- disable trimming completely, you can set to (unsigned long)(-1);
-
-
-*/
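/* For instance, a long-lived program can raise the threshold, or turn
   trimming off entirely, via mallopt(); the -1 below is stored into an
   unsigned long internally, yielding the maximal value:

     mallopt(M_TRIM_THRESHOLD, 256 * 1024);  -- trim above 256K unused
     mallopt(M_TRIM_THRESHOLD, -1);          -- never trim
*/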
-
-
-#ifndef DEFAULT_TOP_PAD
-#define DEFAULT_TOP_PAD (0)
-#endif
-
-/*
- M_TOP_PAD is the amount of extra `padding' space to allocate or
- retain whenever sbrk is called. It is used in two ways internally:
-
- * When sbrk is called to extend the top of the arena to satisfy
- a new malloc request, this much padding is added to the sbrk
- request.
-
- * When malloc_trim is called automatically from free(),
- it is used as the `pad' argument.
-
- In both cases, the actual amount of padding is rounded
- so that the end of the arena is always a system page boundary.
-
- The main reason for using padding is to avoid calling sbrk so
- often. Having even a small pad greatly reduces the likelihood
- that nearly every malloc request during program start-up (or
- after trimming) will invoke sbrk, which needlessly wastes
- time.
-
- Automatic rounding-up to page-size units is normally sufficient
- to avoid measurable overhead, so the default is 0. However, in
- systems where sbrk is relatively slow, it can pay to increase
- this value, at the expense of carrying around more memory than
- the program needs.
-
-*/
-
-
-#ifndef DEFAULT_MMAP_THRESHOLD
-#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
-#endif
-
-/*
-
- M_MMAP_THRESHOLD is the request size threshold for using mmap()
- to service a request. Requests of at least this size that cannot
- be allocated using already-existing space will be serviced via mmap.
- (If enough normal freed space already exists it is used instead.)
-
- Using mmap segregates relatively large chunks of memory so that
- they can be individually obtained and released from the host
- system. A request serviced through mmap is never reused by any
- other request (at least not directly; the system may just so
- happen to remap successive requests to the same locations).
-
- Segregating space in this way has the benefit that mmapped space
- can ALWAYS be individually released back to the system, which
- helps keep the system level memory demands of a long-lived
- program low. Mapped memory can never become `locked' between
- other chunks, as can happen with normally allocated chunks, which
-  means that even trimming via malloc_trim would not release them.
-
- However, it has the disadvantages that:
-
- 1. The space cannot be reclaimed, consolidated, and then
- used to service later requests, as happens with normal chunks.
- 2. It can lead to more wastage because of mmap page alignment
- requirements
- 3. It causes malloc performance to be more dependent on host
- system memory management support routines which may vary in
- implementation quality and may impose arbitrary
- limitations. Generally, servicing a request via normal
- malloc steps is faster than going through a system's mmap.
-
- All together, these considerations should lead you to use mmap
- only for relatively large requests.
-
-
-*/
-
-
-
-#ifndef DEFAULT_MMAP_MAX
-#if HAVE_MMAP
-#define DEFAULT_MMAP_MAX (64)
-#else
-#define DEFAULT_MMAP_MAX (0)
-#endif
-#endif
-
-/*
- M_MMAP_MAX is the maximum number of requests to simultaneously
- service using mmap. This parameter exists because:
-
- 1. Some systems have a limited number of internal tables for
- use by mmap.
- 2. In most systems, overreliance on mmap can degrade overall
- performance.
- 3. If a program allocates many large regions, it is probably
- better off using normal sbrk-based allocation routines that
- can reclaim and reallocate normal heap memory. Using a
- small value allows transition into this mode after the
- first few allocations.
-
- Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
- the default value is 0, and attempts to set it to non-zero values
- in mallopt will fail.
-*/
-
-
-
-
-/*
- USE_DL_PREFIX will prefix all public routines with the string 'dl'.
- Useful to quickly avoid procedure declaration conflicts and linker
- symbol conflicts with existing memory allocation routines.
-
-*/
-
-/* #define USE_DL_PREFIX */
-
-
-
-
-/*
-
- Special defines for linux libc
-
- Except when compiled using these special defines for Linux libc
- using weak aliases, this malloc is NOT designed to work in
- multithreaded applications. No semaphores or other concurrency
- control are provided to ensure that multiple malloc or free calls
-  don't run at the same time, which could be disastrous. A single
- semaphore could be used across malloc, realloc, and free (which is
- essentially the effect of the linux weak alias approach). It would
- be hard to obtain finer granularity.
-
-*/
-
-
-#ifdef INTERNAL_LINUX_C_LIB
-
-#if __STD_C
-
-Void_t * __default_morecore_init (ptrdiff_t);
-Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
-
-#else
-
-Void_t * __default_morecore_init ();
-Void_t *(*__morecore)() = __default_morecore_init;
-
-#endif
-
-#define MORECORE (*__morecore)
-#define MORECORE_FAILURE 0
-#define MORECORE_CLEARS 1
-
-#else /* INTERNAL_LINUX_C_LIB */
-
-#if __STD_C
-extern Void_t* sbrk(ptrdiff_t);
-#else
-extern Void_t* sbrk();
-#endif
-
-#ifndef MORECORE
-#define MORECORE sbrk
-#endif
-
-#ifndef MORECORE_FAILURE
-#define MORECORE_FAILURE -1
-#endif
-
-#ifndef MORECORE_CLEARS
-#define MORECORE_CLEARS 1
-#endif
-
-#endif /* INTERNAL_LINUX_C_LIB */
-
-#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
-
-#define cALLOc __libc_calloc
-#define fREe __libc_free
-#define mALLOc __libc_malloc
-#define mEMALIGn __libc_memalign
-#define rEALLOc __libc_realloc
-#define vALLOc __libc_valloc
-#define pvALLOc __libc_pvalloc
-#define mALLINFo __libc_mallinfo
-#define mALLOPt __libc_mallopt
-
-#pragma weak calloc = __libc_calloc
-#pragma weak free = __libc_free
-#pragma weak cfree = __libc_free
-#pragma weak malloc = __libc_malloc
-#pragma weak memalign = __libc_memalign
-#pragma weak realloc = __libc_realloc
-#pragma weak valloc = __libc_valloc
-#pragma weak pvalloc = __libc_pvalloc
-#pragma weak mallinfo = __libc_mallinfo
-#pragma weak mallopt = __libc_mallopt
-
-#else
-
-#ifdef USE_DL_PREFIX
-#define cALLOc dlcalloc
-#define fREe dlfree
-#define mALLOc dlmalloc
-#define mEMALIGn dlmemalign
-#define rEALLOc dlrealloc
-#define vALLOc dlvalloc
-#define pvALLOc dlpvalloc
-#define mALLINFo dlmallinfo
-#define mALLOPt dlmallopt
-#else /* USE_DL_PREFIX */
-#define cALLOc calloc
-#define fREe free
-#define mALLOc malloc
-#define mEMALIGn memalign
-#define rEALLOc realloc
-#define vALLOc valloc
-#define pvALLOc pvalloc
-#define mALLINFo mallinfo
-#define mALLOPt mallopt
-#endif /* USE_DL_PREFIX */
-
-#endif
-
-/* Public routines */
-
-#if __STD_C
-
-Void_t* mALLOc(size_t);
-void fREe(Void_t*);
-Void_t* rEALLOc(Void_t*, size_t);
-Void_t* mEMALIGn(size_t, size_t);
-Void_t* vALLOc(size_t);
-Void_t* pvALLOc(size_t);
-Void_t* cALLOc(size_t, size_t);
-void cfree(Void_t*);
-int malloc_trim(size_t);
-size_t malloc_usable_size(Void_t*);
-void malloc_stats();
-int mALLOPt(int, int);
-struct mallinfo mALLINFo(void);
-#else
-Void_t* mALLOc();
-void fREe();
-Void_t* rEALLOc();
-Void_t* mEMALIGn();
-Void_t* vALLOc();
-Void_t* pvALLOc();
-Void_t* cALLOc();
-void cfree();
-int malloc_trim();
-size_t malloc_usable_size();
-void malloc_stats();
-int mALLOPt();
-struct mallinfo mALLINFo();
-#endif
-
-
-#ifdef __cplusplus
-}; /* end of extern "C" */
-#endif
-
-/* ---------- To make a malloc.h, end cutting here ------------ */
-#else /* Moved to malloc.h */
-
-#include <malloc.h>
-#if 0
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
-#endif /* 0 */
+#endif /* DEBUG */
-#endif /* 0 */ /* Moved to malloc.h */
-#include <common.h>
+DECLARE_GLOBAL_DATA_PTR;
/*
Emulation of sbrk for WIN32
rval = VirtualFree ((void*)gAddressBase,
gNextAddress - gAddressBase,
MEM_DECOMMIT);
- assert (rval);
+ assert (rval);
}
while (head)
{
return start_address;
else
{
- // Requested region is not available so see if the
- // next region is available. Set 'start_address'
- // to the next region and call 'VirtualQuery()'
- // again.
+ /* Requested region is not available so see if the */
+ /* next region is available. Set 'start_address' */
+ /* to the next region and call 'VirtualQuery()' */
+ /* again. */
start_address = (char*)info.BaseAddress + info.RegionSize;
- // Make sure we start looking for the next region
- // on the *next* 64K boundary. Otherwise, even if
- // the new region is free according to
- // 'VirtualQuery()', the subsequent call to
- // 'VirtualAlloc()' (which follows the call to
- // this routine in 'wsbrk()') will round *down*
- // the requested address to a 64K boundary which
- // we already know is an address in the
- // unavailable region. Thus, the subsequent call
- // to 'VirtualAlloc()' will fail and bring us back
- // here, causing us to go into an infinite loop.
+ /* Make sure we start looking for the next region */
+ /* on the *next* 64K boundary. Otherwise, even if */
+ /* the new region is free according to */
+ /* 'VirtualQuery()', the subsequent call to */
+ /* 'VirtualAlloc()' (which follows the call to */
+ /* this routine in 'wsbrk()') will round *down* */
+ /* the requested address to a 64K boundary which */
+ /* we already know is an address in the */
+ /* unavailable region. Thus, the subsequent call */
+ /* to 'VirtualAlloc()' will fail and bring us back */
+ /* here, causing us to go into an infinite loop. */
start_address =
(void *) AlignPage64K((unsigned long) start_address);
{
new_address = findRegion (new_address, new_size);
- if (new_address == 0)
+ if (!new_address)
return (void*)-1;
gAddressBase = gNextAddress =
(unsigned int)VirtualAlloc (new_address, new_size,
MEM_RESERVE, PAGE_NOACCESS);
- // repeat in case of race condition
- // The region that we found has been snagged
- // by another thread
+ /* repeat in case of race condition */
+ /* The region that we found has been snagged */
+ /* by another thread */
}
while (gAddressBase == 0);
(size + gNextAddress -
AlignPage (gNextAddress)),
MEM_COMMIT, PAGE_READWRITE);
- if (res == 0)
+ if (!res)
return (void*)-1;
}
tmp = (void*)gNextAddress;
#endif
-\f
+
/*
Type declarations
INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
struct malloc_chunk* fd; /* double links -- used only if free. */
struct malloc_chunk* bk;
-};
+} __attribute__((__may_alias__)) ;
typedef struct malloc_chunk* mchunkptr;
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Size of previous chunk, if allocated | |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Size of chunk, in bytes |P|
+ | Size of previous chunk, if allocated | |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | User data starts here... .
- . .
- . (malloc_usable_space() bytes) .
- . |
+ | User data starts here... .
+ . .
+ . (malloc_usable_space() bytes) .
+ . |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Size of chunk |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Size of chunk |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Where "chunk" is the front of the chunk for the purpose of most of
Free chunks are stored in circular doubly-linked lists, and look like this:
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Size of previous chunk |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Size of previous chunk |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`head:' | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Forward pointer to next chunk in list |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Back pointer to previous chunk in list |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | Unused space (may be 0 bytes long) .
- . .
- . |
+ | Forward pointer to next chunk in list |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Back pointer to previous chunk in list |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Unused space (may be 0 bytes long) .
+ . .
+ . |
+
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`foot:' | Size of chunk, in bytes |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The P (PREV_INUSE) bit, stored in the unused low-order bit of the
chunk size (which is always a multiple of two words), is an in-use
The two exceptions to all this are
1. The special chunk `top', which doesn't bother using the
- trailing size field since there is no
- next contiguous chunk that would have to index off it. (After
- initialization, `top' is forced to always exist. If it would
- become less than MINSIZE bytes long, it is replenished via
- malloc_extend_top.)
+ trailing size field since there is no
+ next contiguous chunk that would have to index off it. (After
+ initialization, `top' is forced to always exist. If it would
+ become less than MINSIZE bytes long, it is replenished via
+ malloc_extend_top.)
2. Chunks allocated via mmap, which have the second-lowest-order
- bit (IS_MMAPPED) set in their size fields. Because they are
- never merged or traversed from any other chunk, they have no
- foot size or inuse information.
+ bit (IS_MMAPPED) set in their size fields. Because they are
+ never merged or traversed from any other chunk, they have no
+ foot size or inuse information.
Available chunks are kept in any of several places (all declared below):
*/
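/*
  A sketch of the pointer arithmetic implied by the diagrams above:
  user memory starts two size fields past the front of a chunk, so the
  conversions (defined with the other chunk operations below) are fixed
  offsets:

    chunk2mem(p)   == (Void_t*)((char*)(p) + 2*SIZE_SZ)
    mem2chunk(mem) == (mchunkptr)((char*)(mem) - 2*SIZE_SZ)
*/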
-
-
-\f
-
-
/* sizes, alignments */
#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
#define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
-\f
+
/*
Physical chunk operations
#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
-\f
+
/*
Dealing with use bits
(((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
-\f
+
/*
Dealing with size fields
#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
-\f
+
/*
indexing, maintain locality, and avoid some initialization tests.
*/
-#define top (bin_at(0)->fd) /* The topmost chunk */
+#define top (av_[2]) /* The topmost chunk */
#define last_remainder (bin_at(1)) /* remainder from last split */
#define IAV(i) bin_at(i), bin_at(i)
static mbinptr av_[NAV * 2 + 2] = {
- 0, 0,
+ NULL, NULL,
IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};
-void malloc_bin_reloc (void)
+#ifdef CONFIG_NEEDS_MANUAL_RELOC
+static void malloc_bin_reloc(void)
{
- DECLARE_GLOBAL_DATA_PTR;
+ mbinptr *p = &av_[2];
+ size_t i;
- unsigned long *p = (unsigned long *)(&av_[2]);
- int i;
- for (i=2; i<(sizeof(av_)/sizeof(mbinptr)); ++i) {
- *p++ += gd->reloc_off;
- }
+ for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
+ *p = (mbinptr)((ulong)*p + gd->reloc_off);
+}
+#else
+static inline void malloc_bin_reloc(void) {}
+#endif
+
+#ifdef CONFIG_SYS_MALLOC_DEFAULT_TO_INIT
+static void malloc_init(void);
+#endif
+
+ulong mem_malloc_start = 0;
+ulong mem_malloc_end = 0;
+ulong mem_malloc_brk = 0;
+
+void *sbrk(ptrdiff_t increment)
+{
+ ulong old = mem_malloc_brk;
+ ulong new = old + increment;
+
+	if ((new < mem_malloc_start) || (new > mem_malloc_end))
+		return (void *)MORECORE_FAILURE;
+
+	/*
+	 * If we are giving memory back, clear it out, since we set
+	 * MORECORE_CLEARS to 1. The bounds are checked first so that a
+	 * bad decrement can never clear memory outside the arena.
+	 */
+	if (increment < 0)
+		memset((void *)new, 0, -increment);
+
+ mem_malloc_brk = new;
+
+ return (void *)old;
+}
+
+void mem_malloc_init(ulong start, ulong size)
+{
+ mem_malloc_start = start;
+ mem_malloc_end = start + size;
+ mem_malloc_brk = start;
+
+#ifdef CONFIG_SYS_MALLOC_DEFAULT_TO_INIT
+ malloc_init();
+#endif
+
+ debug("using memory %#lx-%#lx for malloc()\n", mem_malloc_start,
+ mem_malloc_end);
+#ifdef CONFIG_SYS_MALLOC_CLEAR_ON_INIT
+ memset((void *)mem_malloc_start, 0x0, size);
+#endif
+ malloc_bin_reloc();
}
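/*
 * A sketch of how a board brings this allocator up; the address and
 * size here are made-up values for illustration:
 *
 *	mem_malloc_init(0x84000000UL, 1024 * 1024);  (1 MiB arena)
 *	buf = malloc(4096);  (now served from that arena)
 */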
-\f
/* field-extraction macros */
((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
- 126)
+ 126)
/*
bins for chunks < 512 are all spaced 8 bytes apart, and hold
identically sized chunks. This is exploited in malloc.
#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
-\f
+
/*
To help compensate for the large number of bins, a one-level index
#define BINBLOCKWIDTH 4 /* bins per block */
-#define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
+#define binblocks_r ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
+#define binblocks_w (av_[1])
/* bin<->block macros */
#define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
-#define mark_binblock(ii) (binblocks |= idx2binblock(ii))
-#define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
+#define mark_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
+#define clear_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
+
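/*
 * For example, with BINBLOCKWIDTH == 4, bin 9 belongs to block
 * 9 / 4 == 2, so idx2binblock(9) == (1 << 2); mark_binblock(9) sets
 * that 0x4 bit in the bitvector and clear_binblock(9) clears it again.
 */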
-\f
/* Other static bookkeeping data */
/* Tracking mmaps */
-#if 0
+#ifdef DEBUG
static unsigned int n_mmaps = 0;
-#endif /* 0 */
+#endif /* DEBUG */
static unsigned long mmapped_mem = 0;
#if HAVE_MMAP
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
#endif
-\f
+#ifdef CONFIG_SYS_MALLOC_DEFAULT_TO_INIT
+static void malloc_init(void)
+{
+ int i, j;
+
+ debug("bins (av_ array) are at %p\n", (void *)av_);
+
+ av_[0] = NULL; av_[1] = NULL;
+ for (i = 2, j = 2; i < NAV * 2 + 2; i += 2, j++) {
+ av_[i] = bin_at(j - 2);
+ av_[i + 1] = bin_at(j - 2);
+
+	/* Just print the first few bins so that
+	 * we can see they are set up correctly.
+	 */
+ if (i < 10)
+ debug("av_[%d]=%lx av_[%d]=%lx\n",
+ i, (ulong)av_[i],
+ i + 1, (ulong)av_[i + 1]);
+ }
+
+ /* Init the static bookkeeping as well */
+ sbrk_base = (char *)(-1);
+ max_sbrked_mem = 0;
+ max_total_mem = 0;
+#ifdef DEBUG
+	memset((void *)&current_mallinfo, 0, sizeof(struct mallinfo));
+#endif
+}
+#endif
/*
Debugging support
static void do_check_chunk(p) mchunkptr p;
#endif
{
-#if 0 /* causes warnings because assert() is off */
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
-#endif /* 0 */
/* No checkable chunk is mmapped */
assert(!chunk_is_mmapped(p));
#endif
{
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
-#if 0 /* causes warnings because assert() is off */
mchunkptr next = chunk_at_offset(p, sz);
-#endif /* 0 */
do_check_chunk(p);
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
-#if 0 /* causes warnings because assert() is off */
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
long room = sz - s;
-#endif /* 0 */
do_check_inuse_chunk(p);
#define check_malloced_chunk(P,N)
#endif
-\f
+
/*
Macro-based internal utilities
-\f
/* Routines dealing with mmap(). */
#endif /* HAVE_MMAP */
-
-\f
-
/*
Extend the top-most chunk by obtaining memory from system.
Main interface to sbrk (but see also malloc_trim).
/* Guarantee the next brk will be at a page boundary */
correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
- ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
+ ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
/* Allocate correction */
new_brk = (char*)(MORECORE (correction));
/* If not enough space to do this, then user did something very wrong */
if (old_top_size < MINSIZE)
{
- set_head(top, PREV_INUSE); /* will force null return from malloc */
- return;
+ set_head(top, PREV_INUSE); /* will force null return from malloc */
+ return;
}
/* Also keep size a multiple of MALLOC_ALIGNMENT */
old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
set_head_size(old_top, old_top_size);
chunk_at_offset(old_top, old_top_size )->size =
- SIZE_SZ|PREV_INUSE;
+ SIZE_SZ|PREV_INUSE;
chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
- SIZE_SZ|PREV_INUSE;
+ SIZE_SZ|PREV_INUSE;
/* If possible, release the rest. */
if (old_top_size >= MINSIZE)
- fREe(chunk2mem(old_top));
+ fREe(chunk2mem(old_top));
}
}
}
-\f
+
/* Main public routines */
From there, the first of the following steps that succeeds is taken:
1. The bin corresponding to the request size is scanned, and if
- a chunk of exactly the right size is found, it is taken.
+ a chunk of exactly the right size is found, it is taken.
2. The most recently remaindered chunk is used if it is big
- enough. This is a form of (roving) first fit, used only in
- the absence of exact fits. Runs of consecutive requests use
- the remainder of the chunk used for the previous such request
- whenever possible. This limited use of a first-fit style
- allocation strategy tends to give contiguous chunks
- coextensive lifetimes, which improves locality and can reduce
- fragmentation in the long run.
+ enough. This is a form of (roving) first fit, used only in
+ the absence of exact fits. Runs of consecutive requests use
+ the remainder of the chunk used for the previous such request
+ whenever possible. This limited use of a first-fit style
+ allocation strategy tends to give contiguous chunks
+ coextensive lifetimes, which improves locality and can reduce
+ fragmentation in the long run.
3. Other bins are scanned in increasing size order, using a
- chunk big enough to fulfill the request, and splitting off
- any remainder. This search is strictly by best-fit; i.e.,
- the smallest (with ties going to approximately the least
- recently used) chunk that fits is selected.
+ chunk big enough to fulfill the request, and splitting off
+ any remainder. This search is strictly by best-fit; i.e.,
+ the smallest (with ties going to approximately the least
+ recently used) chunk that fits is selected.
4. If large enough, the chunk bordering the end of memory
- (`top') is split off. (This use of `top' is in accord with
- the best-fit search rule. In effect, `top' is treated as
- larger (and thus less well fitting) than any other available
- chunk since it can be extended to be as large as necessary
- (up to system limitations).
+ (`top') is split off. (This use of `top' is in accord with
+ the best-fit search rule. In effect, `top' is treated as
+ larger (and thus less well fitting) than any other available
+ chunk since it can be extended to be as large as necessary
+ (up to system limitations).
5. If the request size meets the mmap threshold and the
- system supports mmap, and there are few enough currently
- allocated mmapped regions, and a call to mmap succeeds,
- the request is allocated via direct memory mapping.
+ system supports mmap, and there are few enough currently
+ allocated mmapped regions, and a call to mmap succeeds,
+ the request is allocated via direct memory mapping.
6. Otherwise, the top of memory is extended by
- obtaining more space from the system (normally using sbrk,
- but definable to anything else via the MORECORE macro).
- Memory is gathered from the system (in system page-sized
- units) in a way that allows chunks obtained across different
- sbrk calls to be consolidated, but does not require
- contiguous memory. Thus, it should be safe to intersperse
- mallocs with other sbrk calls.
+ obtaining more space from the system (normally using sbrk,
+ but definable to anything else via the MORECORE macro).
+ Memory is gathered from the system (in system page-sized
+ units) in a way that allows chunks obtained across different
+ sbrk calls to be consolidated, but does not require
+ contiguous memory. Thus, it should be safe to intersperse
+ mallocs with other sbrk calls.
All allocations are made from the `lowest' part of any found
INTERNAL_SIZE_T nb;
- if ((long)bytes < 0) return 0;
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+ return malloc_simple(bytes);
+#endif
+
+ /* check if mem_malloc_init() was run */
+ if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
+ /* not initialized yet */
+ return NULL;
+ }
+
+ if ((long)bytes < 0) return NULL;
nb = request2size(bytes); /* padded request size; */
if (remainder_size >= (long)MINSIZE) /* too big */
{
- --idx; /* adjust to rescan below after checking last remainder */
- break;
+ --idx; /* adjust to rescan below after checking last remainder */
+ break;
}
else if (remainder_size >= 0) /* exact fit */
{
- unlink(victim, bck, fwd);
- set_inuse_bit_at_offset(victim, victim_size);
- check_malloced_chunk(victim, nb);
- return chunk2mem(victim);
+ unlink(victim, bck, fwd);
+ set_inuse_bit_at_offset(victim, victim_size);
+ check_malloced_chunk(victim, nb);
+ return chunk2mem(victim);
}
}
search for best fitting chunk by scanning bins in blockwidth units.
*/
- if ( (block = idx2binblock(idx)) <= binblocks)
+ if ( (block = idx2binblock(idx)) <= binblocks_r)
{
/* Get to the first marked block */
- if ( (block & binblocks) == 0)
+ if ( (block & binblocks_r) == 0)
{
/* force to an even block boundary */
idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
block <<= 1;
- while ((block & binblocks) == 0)
+ while ((block & binblocks_r) == 0)
{
- idx += BINBLOCKWIDTH;
- block <<= 1;
+ idx += BINBLOCKWIDTH;
+ block <<= 1;
}
}
/* For each bin in this block ... */
do
{
- /* Find and use first big enough chunk ... */
-
- for (victim = last(bin); victim != bin; victim = victim->bk)
- {
- victim_size = chunksize(victim);
- remainder_size = victim_size - nb;
-
- if (remainder_size >= (long)MINSIZE) /* split */
- {
- remainder = chunk_at_offset(victim, nb);
- set_head(victim, nb | PREV_INUSE);
- unlink(victim, bck, fwd);
- link_last_remainder(remainder);
- set_head(remainder, remainder_size | PREV_INUSE);
- set_foot(remainder, remainder_size);
- check_malloced_chunk(victim, nb);
- return chunk2mem(victim);
- }
-
- else if (remainder_size >= 0) /* take */
- {
- set_inuse_bit_at_offset(victim, victim_size);
- unlink(victim, bck, fwd);
- check_malloced_chunk(victim, nb);
- return chunk2mem(victim);
- }
-
- }
+ /* Find and use first big enough chunk ... */
+
+ for (victim = last(bin); victim != bin; victim = victim->bk)
+ {
+ victim_size = chunksize(victim);
+ remainder_size = victim_size - nb;
+
+ if (remainder_size >= (long)MINSIZE) /* split */
+ {
+ remainder = chunk_at_offset(victim, nb);
+ set_head(victim, nb | PREV_INUSE);
+ unlink(victim, bck, fwd);
+ link_last_remainder(remainder);
+ set_head(remainder, remainder_size | PREV_INUSE);
+ set_foot(remainder, remainder_size);
+ check_malloced_chunk(victim, nb);
+ return chunk2mem(victim);
+ }
+
+ else if (remainder_size >= 0) /* take */
+ {
+ set_inuse_bit_at_offset(victim, victim_size);
+ unlink(victim, bck, fwd);
+ check_malloced_chunk(victim, nb);
+ return chunk2mem(victim);
+ }
+
+ }
bin = next_bin(bin);
do /* Possibly backtrack to try to clear a partial block */
{
- if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
- {
- binblocks &= ~block;
- break;
- }
- --startidx;
+ if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
+ {
+ av_[1] = (mbinptr)(binblocks_r & ~block);
+ break;
+ }
+ --startidx;
q = prev_bin(q);
} while (first(q) == q);
/* Get to the next possibly nonempty block */
- if ( (block <<= 1) <= binblocks && (block != 0) )
+ if ( (block <<= 1) <= binblocks_r && (block != 0) )
{
- while ((block & binblocks) == 0)
- {
- idx += BINBLOCKWIDTH;
- block <<= 1;
- }
+ while ((block & binblocks_r) == 0)
+ {
+ idx += BINBLOCKWIDTH;
+ block <<= 1;
+ }
}
else
- break;
+ break;
}
}
#if HAVE_MMAP
/* If big and would otherwise need to extend, try to use mmap instead */
if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
- (victim = mmap_chunk(nb)) != 0)
+ (victim = mmap_chunk(nb)))
return chunk2mem(victim);
#endif
/* Try to extend */
malloc_extend_top(nb);
if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
- return 0; /* propagate failure */
+ return NULL; /* propagate failure */
}
victim = top;
}
-\f
+
/*
2. If the chunk was allocated via mmap, it is release via munmap().
3. If a returned chunk borders the current high end of memory,
- it is consolidated into the top, and if the total unused
- topmost memory exceeds the trim threshold, malloc_trim is
- called.
+ it is consolidated into the top, and if the total unused
+ topmost memory exceeds the trim threshold, malloc_trim is
+ called.
4. Other chunks are consolidated as they arrive, and
- placed in corresponding bins. (This includes the case of
- consolidating with the current `last_remainder').
+ placed in corresponding bins. (This includes the case of
+ consolidating with the current `last_remainder').
*/
mchunkptr fwd; /* misc temp for linking */
int islr; /* track whether merging with last_remainder */
- if (mem == 0) /* free(0) has no effect */
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ /* free() is a no-op - all the memory will be freed on relocation */
+ if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+ return;
+#endif
+
+ if (mem == NULL) /* free(0) has no effect */
return;
p = mem2chunk(mem);
}
-\f
+
/*
mchunkptr fwd; /* misc temp for linking */
#ifdef REALLOC_ZERO_BYTES_FREES
- if (bytes == 0) { fREe(oldmem); return 0; }
+ if (!bytes) {
+ fREe(oldmem);
+ return NULL;
+ }
#endif
- if ((long)bytes < 0) return 0;
+ if ((long)bytes < 0) return NULL;
/* realloc of null is supposed to be same as malloc */
- if (oldmem == 0) return mALLOc(bytes);
+ if (oldmem == NULL) return mALLOc(bytes);
+
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+ /* This is harder to support and should not be needed */
+ panic("pre-reloc realloc() is not supported");
+ }
+#endif
newp = oldp = mem2chunk(oldmem);
newsize = oldsize = chunksize(oldp);
if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
/* Must alloc, copy, free. */
newmem = mALLOc(bytes);
- if (newmem == 0) return 0; /* propagate failure */
+ if (!newmem)
+ return NULL; /* propagate failure */
MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
munmap_chunk(oldp);
return newmem;
/* Forward into top only if a remainder */
if (next == top)
{
- if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
- {
- newsize += nextsize;
- top = chunk_at_offset(oldp, nb);
- set_head(top, (newsize - nb) | PREV_INUSE);
- set_head_size(oldp, nb);
- return chunk2mem(oldp);
- }
+ if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
+ {
+ newsize += nextsize;
+ top = chunk_at_offset(oldp, nb);
+ set_head(top, (newsize - nb) | PREV_INUSE);
+ set_head_size(oldp, nb);
+ return chunk2mem(oldp);
+ }
}
/* Forward into next chunk */
else if (((long)(nextsize + newsize) >= (long)(nb)))
{
- unlink(next, bck, fwd);
- newsize += nextsize;
- goto split;
+ unlink(next, bck, fwd);
+ newsize += nextsize;
+ goto split;
}
}
else
{
- next = 0;
+ next = NULL;
nextsize = 0;
}
/* try forward + backward first to save a later consolidation */
- if (next != 0)
+ if (next != NULL)
{
- /* into top */
- if (next == top)
- {
- if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
- {
- unlink(prev, bck, fwd);
- newp = prev;
- newsize += prevsize + nextsize;
- newmem = chunk2mem(newp);
- MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
- top = chunk_at_offset(newp, nb);
- set_head(top, (newsize - nb) | PREV_INUSE);
- set_head_size(newp, nb);
- return newmem;
- }
- }
-
- /* into next chunk */
- else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
- {
- unlink(next, bck, fwd);
- unlink(prev, bck, fwd);
- newp = prev;
- newsize += nextsize + prevsize;
- newmem = chunk2mem(newp);
- MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
- goto split;
- }
+ /* into top */
+ if (next == top)
+ {
+ if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
+ {
+ unlink(prev, bck, fwd);
+ newp = prev;
+ newsize += prevsize + nextsize;
+ newmem = chunk2mem(newp);
+ MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
+ top = chunk_at_offset(newp, nb);
+ set_head(top, (newsize - nb) | PREV_INUSE);
+ set_head_size(newp, nb);
+ return newmem;
+ }
+ }
+
+ /* into next chunk */
+ else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
+ {
+ unlink(next, bck, fwd);
+ unlink(prev, bck, fwd);
+ newp = prev;
+ newsize += nextsize + prevsize;
+ newmem = chunk2mem(newp);
+ MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
+ goto split;
+ }
}
/* backward only */
- if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
+ if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
{
- unlink(prev, bck, fwd);
- newp = prev;
- newsize += prevsize;
- newmem = chunk2mem(newp);
- MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
- goto split;
+ unlink(prev, bck, fwd);
+ newp = prev;
+ newsize += prevsize;
+ newmem = chunk2mem(newp);
+ MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
+ goto split;
}
}
newmem = mALLOc (bytes);
- if (newmem == 0) /* propagate failure */
- return 0;
+ if (newmem == NULL) /* propagate failure */
+ return NULL;
/* Avoid copy if newp is next chunk after oldp. */
/* (This can only happen when new chunk is sbrk'ed.) */
}
-\f
+
/*
mchunkptr remainder; /* spare room at end to split off */
long remainder_size; /* its size */
- if ((long)bytes < 0) return 0;
+ if ((long)bytes < 0) return NULL;
+
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+ return memalign_simple(alignment, bytes);
+ }
+#endif
/* If need less alignment than we give anyway, just relay to malloc */
nb = request2size(bytes);
m = (char*)(mALLOc(nb + alignment + MINSIZE));
- if (m == 0) return 0; /* propagate failure */
+ /*
+ * The attempt to over-allocate (with a size large enough to guarantee the
+ * ability to find an aligned region within allocated memory) failed.
+ *
+ * Try again, this time only allocating exactly the size the user wants. If
+ * the allocation now succeeds and just happens to be aligned, we can still
+ * fulfill the user's request.
+ */
+ if (m == NULL) {
+ size_t extra, extra2;
+ /*
+  * Use bytes, not nb: mALLOc calls request2size internally, and applying
+  * it twice would account for the chunk header twice.
+  */
+ m = (char*)(mALLOc(bytes));
+ /* Aligned -> return it */
+ if ((((unsigned long)(m)) % alignment) == 0)
+ return m;
+ /*
+ * Otherwise, try again, requesting enough extra space to be able to
+ * acquire alignment.
+ */
+ fREe(m);
+ /* Add in extra bytes to match misalignment of unexpanded allocation */
+ extra = alignment - (((unsigned long)(m)) % alignment);
+ m = (char*)(mALLOc(bytes + extra));
+ /*
+ * m might not be the same as before. Validate that the previous value of
+ * extra still works for the current value of m.
+ * If (!m), extra2 would compute as alignment (always > extra), so the
+ * (m) guard below skips the check and lets the NULL test that follows
+ * report the failure.
+ */
+ if (m) {
+ extra2 = alignment - (((unsigned long)(m)) % alignment);
+ if (extra2 > extra) {
+ fREe(m);
+ m = NULL;
+ }
+ }
+ /* Fall through to original NULL check and chunk splitting logic */
+ }
+
+ if (m == NULL) return NULL; /* propagate failure */
p = mem2chunk(m);
}
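/*
 * A worked sketch of the realignment arithmetic in the retry path above:
 * extra is the padding that shifts an unaligned block up to the next
 * alignment boundary. The address below is hypothetical.
 */
#include <assert.h>

int main(void)
{
	unsigned long m = 0x1004;	/* hypothetical unaligned result */
	unsigned long alignment = 16;	/* must be a power of two */
	unsigned long extra = alignment - (m % alignment);

	/* 0x1004 % 16 == 4, so extra == 12 and m + extra == 0x1010. */
	assert((m + extra) % alignment == 0);
	return 0;
}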
-\f
+
/*
/* check if expand_top called, in which case don't need to clear */
+#ifdef CONFIG_SYS_MALLOC_CLEAR_ON_INIT
#if MORECORE_CLEARS
mchunkptr oldtop = top;
INTERNAL_SIZE_T oldtopsize = chunksize(top);
+#endif
#endif
Void_t* mem = mALLOc (sz);
- if ((long)n < 0) return 0;
+ if ((long)n < 0) return NULL;
- if (mem == 0)
- return 0;
+ if (mem == NULL)
+ return NULL;
else
{
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+ memset(mem, 0, sz);
+ return mem;
+ }
+#endif
p = mem2chunk(mem);
/* Two optional cases in which clearing not necessary */
csz = chunksize(p);
+#ifdef CONFIG_SYS_MALLOC_CLEAR_ON_INIT
#if MORECORE_CLEARS
if (p == oldtop && csz > oldtopsize)
{
/* clear only the bytes from non-freshly-sbrked memory */
csz = oldtopsize;
}
+#endif
#endif
MALLOC_ZERO(mem, csz - SIZE_SZ);
}
#endif
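/*
 * The shortcut above relies on MORECORE_CLEARS: bytes obtained from a
 * fresh sbrk are already zero, so only the recycled prefix of the chunk
 * (the old top chunk, oldtopsize bytes) needs clearing. A hypothetical
 * helper mirroring that logic:
 */
#include <string.h>

static void clear_new_chunk(void *mem, size_t csz, size_t recycled)
{
	/* Bytes past 'recycled' came straight from sbrk and are already 0. */
	memset(mem, 0, csz < recycled ? csz : recycled);
}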
-\f
+
/*
if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
{
- /* Try to figure out what we have */
- current_brk = (char*)(MORECORE (0));
- top_size = current_brk - (char*)top;
- if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
- {
- sbrked_mem = current_brk - sbrk_base;
- set_head(top, top_size | PREV_INUSE);
- }
- check_chunk(top);
- return 0;
+ /* Try to figure out what we have */
+ current_brk = (char*)(MORECORE (0));
+ top_size = current_brk - (char*)top;
+ if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
+ {
+ sbrked_mem = current_brk - sbrk_base;
+ set_head(top, top_size | PREV_INUSE);
+ }
+ check_chunk(top);
+ return 0;
}
else
{
- /* Success. Adjust top accordingly. */
- set_head(top, (top_size - extra) | PREV_INUSE);
- sbrked_mem -= extra;
- check_chunk(top);
- return 1;
+ /* Success. Adjust top accordingly. */
+ set_head(top, (top_size - extra) | PREV_INUSE);
+ sbrked_mem -= extra;
+ check_chunk(top);
+ return 1;
}
}
}
}
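/*
 * Usage sketch for the trimming path handled above. malloc_trim is a
 * dlmalloc extension (declared here for a standalone build); it returns 1
 * when the break was moved back, 0 otherwise.
 */
#include <stdlib.h>

extern int malloc_trim(size_t pad);

int main(void)
{
	void *big = malloc(1 << 20);

	free(big);
	/* Keep 64 KiB of slack in top; return the rest to the system. */
	return malloc_trim(64 * 1024) ? 0 : 1;
}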
-\f
+
/*
malloc_usable_size:
#endif
{
mchunkptr p;
- if (mem == 0)
+ if (mem == NULL)
return 0;
else
{
}
-\f
+
/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
-#if 0
+#ifdef DEBUG
static void malloc_update_mallinfo()
{
int i;
#ifdef DEBUG
check_free_chunk(p);
for (q = next_chunk(p);
- q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
- q = next_chunk(q))
- check_inuse_chunk(q);
+ q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
+ q = next_chunk(q))
+ check_inuse_chunk(q);
#endif
avail += chunksize(p);
navail++;
current_mallinfo.keepcost = chunksize(top);
}
-#endif /* 0 */
+#endif /* DEBUG */
-\f
+
/*
*/
-#if 0
+#ifdef DEBUG
void malloc_stats()
{
malloc_update_mallinfo();
printf("max system bytes = %10u\n",
- (unsigned int)(max_total_mem));
+ (unsigned int)(max_total_mem));
printf("system bytes = %10u\n",
- (unsigned int)(sbrked_mem + mmapped_mem));
+ (unsigned int)(sbrked_mem + mmapped_mem));
printf("in use bytes = %10u\n",
- (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
+ (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
#if HAVE_MMAP
printf("max mmap regions = %10u\n",
- (unsigned int)max_n_mmaps);
+ (unsigned int)max_n_mmaps);
#endif
}
-#endif /* 0 */
+#endif /* DEBUG */
/*
mallinfo returns a copy of updated current mallinfo.
*/
-#if 0
+#ifdef DEBUG
struct mallinfo mALLINFo()
{
malloc_update_mallinfo();
return current_mallinfo;
}
-#endif /* 0 */
+#endif /* DEBUG */
-\f
+
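/*
 * Usage sketch for the DEBUG-gated statistics interfaces above; the field
 * names follow the struct mallinfo members this file reports.
 */
#ifdef DEBUG
#include <stdio.h>

static void report_heap(void)
{
	struct mallinfo mi = mALLINFo();

	printf("in use %d bytes, free %d bytes\n",
	       mi.uordblks, mi.fordblks);
	malloc_stats();		/* prints the summary lines shown above */
}
#endif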
/*
mallopt:
}
}
+int initf_malloc(void)
+{
+#if CONFIG_VAL(SYS_MALLOC_F_LEN)
+ assert(gd->malloc_base); /* Set up by crt0.S */
+ gd->malloc_limit = CONFIG_VAL(SYS_MALLOC_F_LEN);
+ gd->malloc_ptr = 0;
+#endif
+
+ return 0;
+}
+
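/*
 * initf_malloc() only publishes the pool. A minimal sketch, assuming the
 * usual bump-pointer scheme, of how an early allocator could consume
 * these fields; simple_alloc_sketch is hypothetical, not U-Boot's actual
 * pre-relocation allocator.
 */
static void *simple_alloc_sketch(size_t bytes)
{
	unsigned long new_ptr = gd->malloc_ptr + ALIGN(bytes, sizeof(long));

	if (new_ptr > gd->malloc_limit)
		return NULL;		/* pool exhausted; there is no free() */

	void *ptr = (void *)(gd->malloc_base + gd->malloc_ptr);

	gd->malloc_ptr = new_ptr;
	return ptr;
}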
/*
History:
V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
* return null for negative arguments
* Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
- * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
- (e.g. WIN32 platforms)
- * Cleanup up header file inclusion for WIN32 platforms
- * Cleanup code to avoid Microsoft Visual C++ compiler complaints
- * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
- memory allocation routines
- * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
- * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
+ * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
+ (e.g. WIN32 platforms)
+ * Clean up header file inclusion for WIN32 platforms
+ * Cleanup code to avoid Microsoft Visual C++ compiler complaints
+ * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
+ memory allocation routines
+ * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
+ * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
usage of 'assert' in non-WIN32 code
- * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
- avoid infinite loop
+ * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
+ avoid infinite loop
* Always call 'fREe()' rather than 'free()'
V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
* Added anonymously donated WIN32 sbrk emulation
* Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
* malloc_extend_top: fix mask error that caused wastage after
- foreign sbrks
+ foreign sbrks
* Add linux mremap support code from HJ Liu
V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
* Integrated most documentation with the code.
* Add support for mmap, with help from
- Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+ Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
* Use last_remainder in more cases.
* Pack bins using idea from colin@nyx10.cs.du.edu
* Use ordered bins instead of best-fit threshold
* Support another case of realloc via move into top
* Fix error occurring when initial sbrk_base not word-aligned.
* Rely on page size for units instead of SBRK_UNIT to
- avoid surprises about sbrk alignment conventions.
+ avoid surprises about sbrk alignment conventions.
* Add mallinfo, mallopt. Thanks to Raymond Nijssen
- (raymond@es.ele.tue.nl) for the suggestion.
+ (raymond@es.ele.tue.nl) for the suggestion.
* Add `pad' argument to malloc_trim and top_pad mallopt parameter.
* More precautions for cases where other routines call sbrk,
- courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+ courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
* Added macros etc., allowing use in linux libc from
- H.J. Lu (hjl@gnu.ai.mit.edu)
+ H.J. Lu (hjl@gnu.ai.mit.edu)
* Inverted this history list
V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
* Re-tuned and fixed to behave more nicely with V2.6.0 changes.
* Removed all preallocation code since under current scheme
- the work required to undo bad preallocations exceeds
- the work saved in good cases for most test programs.
+ the work required to undo bad preallocations exceeds
+ the work saved in good cases for most test programs.
* No longer use return list or unconsolidated bins since
- no scheme using them consistently outperforms those that don't
- given above changes.
+ no scheme using them consistently outperforms those that don't
+ given above changes.
* Use best fit for very large chunks to prevent some worst-cases.
* Added some support for debugging
V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
* Removed footers when chunks are in use. Thanks to
- Paul Wilson (wilson@cs.texas.edu) for the suggestion.
+ Paul Wilson (wilson@cs.texas.edu) for the suggestion.
V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
* Added malloc_trim, with help from Wolfram Gloger
- (wmglo@Dent.MED.Uni-Muenchen.DE).
+ (wmglo@Dent.MED.Uni-Muenchen.DE).
V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
* faster bin computation & slightly different binning
* merged all consolidations to one part of malloc proper
- (eliminating old malloc_find_space & malloc_clean_bin)
+ (eliminating old malloc_find_space & malloc_clean_bin)
* Scan 2 returns chunks (not just 1)
* Propagate failure in realloc if malloc returns 0
* Add stuff to allow compilation on non-ANSI compilers
- from kpv@research.att.com
+ from kpv@research.att.com
V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
* removed potential for odd address access in prev_chunk
* misc cosmetics and a bit more internal documentation
* anticosmetics: mangled names in macros to evade debugger strangeness
* tested on sparc, hp-700, dec-mips, rs6000
- with gcc & native cc (hp, dec only) allowing
- Detlefs & Zorn comparison study (in SIGPLAN Notices.)
+ with gcc & native cc (hp, dec only) allowing
+ Detlefs & Zorn comparison study (in SIGPLAN Notices.)
Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
* Based loosely on libg++-1.2X malloc. (It retains some of the overall
- structure of old version, but most details differ.)
+ structure of old version, but most details differ.)
*/
-
-