COMP2310 Assignment 1: Malloc - Implementation of a dynamic memory allocator


Managing memory is a major part of programming in C. You have used malloc() and free() in the recent labs. You have also built a very basic memory allocator, and it is now time to build a more advanced one. In this assignment, you will implement a memory allocator that allows users to malloc() and free() memory as needed. Your allocator will request large chunks of memory from the OS and efficiently manage all of the bookkeeping and memory. The allocator you will implement is inspired by the DLMalloc allocator designed by Doug Lea. DLMalloc also inspired the PTMalloc allocator, which GLibC currently uses. Our allocator is a simplified version of DLMalloc, so you will notice many similarities.

We hope that the last two labs have motivated the need for dynamic memory allocators.

Specifically, we have seen that while it is certainly possible to use the low-level mmap and munmap functions to manage areas of virtual memory, programmers need the convenience and efficiency of more fine-grained memory allocators. If we manage the memory obtained from the OS ourselves, we can allow variables to be allocated and freed in any order, and we can reuse freed memory for other variables.
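As a reminder of what the labs covered, the sketch below shows how a large chunk of memory might be requested from the OS with mmap and released with munmap. The chunk size and the CHUNK_SIZE name are illustrative assumptions, not part of the assignment's required interface.

#include <sys/mman.h>
#include <stdio.h>

#define CHUNK_SIZE (16 * 4096)   /* hypothetical chunk size, for illustration only */

int main(void) {
    /* Ask the OS for one anonymous, private chunk of virtual memory. */
    void *chunk = mmap(NULL, CHUNK_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A real allocator would carve this chunk into blocks itself;
     * here we simply hand the whole chunk back to the OS. */
    munmap(chunk, CHUNK_SIZE);
    return 0;
}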

The last lab taught you how to build an implicit free list allocator for managing free blocks. In this assignment, we will first build a more efficient free list data structure called an explicit free list, and then perform a number of optimizations.

The block allocation time with an implicit free list is linear in the total number of heap blocks, which is not suitable for a high-performance allocator. We can add a next and previous pointer to each block’s metadata so that we can iterate over only the free blocks. The resulting linked list data structure is called an explicit free list. Using a doubly linked list instead of an implicit free list reduces the first-fit allocation time from linear in the total number of blocks to linear in the number of free blocks.
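A minimal sketch of what such a doubly linked free list might look like in C is shown below. The block_t layout and the helper names (free_list_head, free_list_insert, free_list_remove) are illustrative assumptions, not the metadata layout mandated by the assignment.

#include <stddef.h>
#include <stdbool.h>

/* Hypothetical block header for an explicit free list. */
typedef struct block {
    size_t size;          /* block size, including this header                */
    bool allocated;       /* is the block currently handed out to the user?   */
    struct block *next;   /* next block in the free list (free blocks only)   */
    struct block *prev;   /* previous block in the free list                  */
} block_t;

static block_t *free_list_head = NULL;

/* Insert a newly freed block at the head of the free list (LIFO order). */
static void free_list_insert(block_t *b) {
    b->prev = NULL;
    b->next = free_list_head;
    if (free_list_head != NULL) {
        free_list_head->prev = b;
    }
    free_list_head = b;
}

/* Unlink a block from the free list, e.g. when it is about to be allocated. */
static void free_list_remove(block_t *b) {
    if (b->prev != NULL) {
        b->prev->next = b->next;
    } else {
        free_list_head = b->next;
    }
    if (b->next != NULL) {
        b->next->prev = b->prev;
    }
}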

Fragmentation occurs when otherwise unused memory is not available to satisfy allocation requests. This happens because when we split large blocks into smaller ones to fulfill user requests for memory, we end up with many small blocks. However, some of those blocks can be merged back into a larger block. Addressing this issue requires us to check, when freeing a block, whether it is adjacent to another block that is already free. If the neighboring blocks are free, we can coalesce them into a single larger block.
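As a sketch of the idea, the snippet below coalesces with the right-hand neighbor only, reusing the illustrative block_t from above and assuming blocks within a chunk are laid out contiguously so that the neighbor can be found by pointer arithmetic. Finding the left neighbor needs extra metadata (for example a footer or a left-size field), which is omitted here; handling the edges of a chunk is discussed next.

/* Locate the header immediately after this block in memory. */
static block_t *right_neighbor(block_t *b) {
    return (block_t *)((char *)b + b->size);
}

/* Merge a newly freed block with its right neighbor if that neighbor is free. */
static void coalesce_right(block_t *b) {
    block_t *right = right_neighbor(b);
    if (!right->allocated) {
        free_list_remove(right);   /* neighbor is absorbed, so unlink it */
        b->size += right->size;
    }
}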

One detail we must consider is how to handle the edges of the chunks obtained from the OS. If we simply start the first allocable block at the beginning of the memory chunk, then we may run into problems when trying to free that block later, because a block at the edge of the chunk is missing a neighbor. A simple solution is to insert a fencepost at each end of the chunk. A fencepost is a dummy header containing no allocable memory, which serves as a neighbor to the first and last allocable blocks in the chunk.

Now we can look up the neighbors of those blocks without worrying about accidentally coalescing outside of the memory chunk allocated by the OS: whenever one of the neighbors is a fencepost, we do not coalesce in that direction.
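One way this might look, continuing the illustrative block_t and free_list_insert sketched earlier (the real assignment may use a dedicated fencepost type or different sizes):

/* Initialize a fresh OS chunk: a fencepost at each end and one large
 * free block in between. */
static block_t *init_chunk(void *chunk, size_t chunk_size) {
    block_t *left_fence  = (block_t *)chunk;
    block_t *right_fence = (block_t *)((char *)chunk + chunk_size - sizeof(block_t));

    /* Fenceposts are marked allocated and hold no allocable memory, so
     * coalescing never walks past the edge of the chunk. */
    left_fence->size      = sizeof(block_t);
    left_fence->allocated = true;
    right_fence->size      = sizeof(block_t);
    right_fence->allocated = true;

    /* Everything between the fenceposts becomes one large free block. */
    block_t *first = (block_t *)((char *)chunk + sizeof(block_t));
    first->size      = chunk_size - 2 * sizeof(block_t);
    first->allocated = false;
    free_list_insert(first);
    return first;
}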

We will also perform the following optimizations as part of the assignment to improve the space and time complexity of our memory allocator.

The allocator may be unable to find a fit for the requested block. If the free blocks are already maximally coalesced, then the allocator asks the kernel for additional heap memory by calling mmap. The allocator transforms the additional memory into one large free block, inserts the block in the free list, and then places the requested block in this new free block.
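A sketch of this step, reusing the illustrative CHUNK_SIZE, block_t, and init_chunk helpers from the earlier snippets (none of these names are prescribed by the assignment):

#include <sys/mman.h>

/* When no free block fits, request another chunk from the OS, wrap it with
 * fenceposts, and return the resulting large free block (or NULL on failure). */
static block_t *grow_heap(size_t size) {
    size_t chunk_size = CHUNK_SIZE;
    while (chunk_size < size + 2 * sizeof(block_t)) {
        chunk_size *= 2;   /* make sure the new chunk can hold the request */
    }
    void *chunk = mmap(NULL, chunk_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) {
        return NULL;       /* out of memory: let malloc() return NULL */
    }
    return init_chunk(chunk, chunk_size);
}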

As we know already, when an application requests a block of k bytes, the allocator searches the free list for a free block that is large enough to hold the requested block.

The placement policy dictates how the allocator performs this search. There are three popular policies.
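For example, a first-fit search (one common placement policy) over the explicit free list might look like the sketch below, reusing the illustrative block_t and free_list_head from earlier.

/* First fit: return the first free block large enough for the request,
 * or NULL if none exists. Only free blocks are examined. */
static block_t *find_fit_first(size_t size) {
    for (block_t *b = free_list_head; b != NULL; b = b->next) {
        if (b->size >= size) {
            return b;
        }
    }
    return NULL;
}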