Upgraded MemoryArena to allow popping memory and to let a subsystem create its own arena instead of pushing everything onto the same stack.

Made with Gemini
+ Updated skills to improve the agents.
2026-01-25 11:01:32 -05:00
parent 73db1174f2
commit 0788fdeb98
11 changed files with 508 additions and 266 deletions

View File

@@ -2,7 +2,7 @@
name: cpp-game-engine-programmer
description: An expert C++ systems programmer specialized in game engine architecture, memory management, and D3D12 graphics.
trusted_commands:
- "*agent_build.bat *"
- "misc\agent_build.bat *"
---
# C++ Game Engine Programmer Skill

View File

@@ -2,7 +2,7 @@
name: debugger-programmer
description: An expert C++ systems programmer specialized in game engine debugging
trusted_commands:
- "*agent_build.bat *"
- "miscagent_build.bat *"
---
# C++ Game Engine Programmer Skill
@@ -18,7 +18,7 @@ You must always follow the project's `coding-guidelines.md`. Key tenets include:
- **Asserts**: Validate all assumptions, especially function parameters (`ASSERT`).
## Workflows
- **Building**: Use the Agent_build.bat or the /build command to compile the project.
- **Building**: Use the misc\Agent_build.bat or the /build command to compile the project.
- **Unit Tests**: After you find an issue, suggest unit tests to detect it in the future.
## Tone

View File

@@ -1,45 +1,64 @@
# Memory System Status & Migration Plan
# Paged Memory Arena Architecture
## Completed Work
## Status
**Implemented & Active** (Jan 2026)
### Core Systems
- **MemoryArena**: Implemented linear allocator with alignment, markers, and reset support.
- **Global Arenas**:
- `ScratchArena` (64MB): Transient per-frame memory.
- `EngineArena` (256MB): Persistent engine memory (internal).
- `GameArena` (512MB): Persistent game memory (exported to Game DLL).
- **Helpers**: Added `ArenaPushType<T>` and `ArenaPushArray<T>` with automatic zero-initialization.
## Overview
The memory system uses a **Paged Arena** model backed by global **Memory Pools**. This architecture supports indefinite growth, efficient clearing, and minimizes OS-level allocations by sub-allocating from large pre-reserved buffers.
### Migrated Subsystems
- **Display**: `Win32Window` and `Win32DisplayDevice` now use `EngineArena`.
- **Game Entities**: `Entity.h` uses `GameArena` for entity allocation; manual `free` calls removed from `game.cpp`.
- **Hot Reload**: `HotReload.cpp` (persistent paths to `EngineArena`) and `Win32HotReload.cpp` (temp paths to `ScratchArena`).
## Architecture
### 1. Memory Pool (`MemoryPool`)
A Global Source of memory blocks.
- **Role**: Manages a large contiguous memory region (e.g., 512MB for Game).
- **Implementation**: Free List Allocator.
- **Operations**: `AllocateBlock` (First-Fit with Splitting), `FreeBlock` (Returns to list).
- **Optimization**: **Block Splitting** is implemented to preserve free space. If a block in the free list is significantly larger than requested (`Alloc + Header + 16 bytes`), it is split, and the remainder is returned to the free list. For example, the first 64KB page requested from a fresh 512MB pool carves off just that page and leaves the rest on the free list. This prevents pool exhaustion from small allocations consuming large blocks.
- **Instances**:
- `g_EngineMemory` (256MB)
- `g_GameMemory` (512MB)
- `g_ScratchMemory` (64MB)
### 2. Memory Arena (`MemoryArena`)
A High-Level Allocator.
- **Structure**: A linked list of `MemoryBlocks`.
- **Behavior**:
- **Growth**: Starts with one block. If an allocation exceeds capacity, it requests a new Block (Page) from the backing Pool and links it.
- **Alloc**: Linear bump-pointer within the current block.
- **Clear**: Returns all blocks (except the first) to the Pool. Resets the first block.
- **Realloc**: Supports in-place expansion (if top of stack) or copy-and-move.
- **Instances**: `GetGameArena()`, `GetEngineArena()`, `GetScratchArena()`.
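Growth in practice (a sketch with illustrative sizes, assuming 64KB default pages):

```cpp
void* a = ArenaPush(arena, Kilobytes(30), 16, "A"); // page 1
void* b = ArenaPush(arena, Kilobytes(30), 16, "B"); // page 1 (60KB used)
void* c = ArenaPush(arena, Kilobytes(30), 16, "C"); // overflow: page 2 requested and linked
ArenaReset(arena); // page 2 returned to the pool; page 1 rewound to Used = 0
```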
### 3. Memory Block (`MemoryBlock`)
The unit of exchange between Pool and Arena.
- **Header**: Includes `Magic` (debug safety), `Next` pointer, `TotalSize` (renamed from `Size`), and `Used` offset.
- **Alignment**: 16-byte alignment enforced.
- **Safety**: Debug builds use memory poisoning (`0xCD` on alloc, `0xDD` on free) and Magic number checks to detect corruption.
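Illustratively, a block is laid out as follows (field order per `MemoryArena.h`):

```cpp
// [ Magic | Next | TotalSize | Used ][ data: TotalSize - sizeof(MemoryBlock) bytes ]
//   ^-- MemoryBlock header            ^-- GetData()
```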
### 4. Arena Pop (`ArenaPop`)
Support for LIFO allocations (reclaiming memory).
- **Behavior**: Checks if the pointer is at the very top of the stack (`CurrentBlock->Used`).
- **Optimization**: If valid, decrements `Used` to reclaim space. If not (fragmented), does nothing.
- **Usage**: Critical for `ImGui` vector resizing to prevent exponential memory consumption.
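For example, `ArenaPop` makes a growable buffer cheap in the common LIFO case (a sketch; `GrowBuffer` is a hypothetical helper, not part of the system):

```cpp
void* GrowBuffer(MemoryArena* arena, void* data, size_t oldSize, size_t newSize)
{
    if (data && ArenaPop(arena, data, oldSize))
    {
        // The old buffer was top-of-stack and has been reclaimed. Its bytes are
        // untouched, so if the re-push spills to a new page we can still copy.
        void* grown = ArenaPush(arena, newSize, 16, "GrowBuffer");
        if (grown != data)
        {
            MemCopy(grown, data, oldSize);
        }
        return grown;
    }
    // Fragmented case: allocate fresh and abandon the old bytes.
    void* grown = ArenaPush(arena, newSize, 16, "GrowBuffer");
    if (data)
    {
        MemCopy(grown, data, oldSize);
    }
    return grown;
}
```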
## Remaining Work
The following subsystems still use legacy `malloc`/`calloc`/`realloc`/`free` and need to be migrated.
### IO System
- **Files**: `Core/HAL/IO/IOStream.cpp`, `Core/HAL/IO/Win32/Win32IOStream.cpp`
- **Allocations**: `IOStream` instance, data buffers, `Win32IOStreamDataPayload`.
- **Challenge**: `Realloc` is used for growing buffers.
- **Strategy**:
  - `IOStream` struct -> `ScratchArena` (if transient) or `EngineArena`.
  - Buffers: Evaluate whether `ArenaPush` with a large enough capacity is sufficient, or implement a growable buffer on top of the arena (or use `std::vector` with a custom allocator if absolutely needed, but prefer a simple fixed maximum size where possible).
### Graphics / Debug
- **Files**: `Graphics/DebugDisplayRenderer.cpp`
- **Allocations**: `DepthTestedVertices`, `OverlayVertices`.
- **Strategy**: Use `EngineArena` or a dedicated `RenderArena` if these are persistent. If per-frame, move to `ScratchArena`.
### Shader Compiler
- **Files**: `JulietShaderCompiler/ShaderCompiler.cpp`
- **Allocations**: Argument arrays, file buffers.
- **Strategy**: Use `ScratchArena` for all compilation tasks, as they are transient.
### Filesystem
- **Files**: `Core/HAL/Filesystem/Filesystem.cpp`
- **Allocations**: `CachedBasePath`.
- **Strategy**: Migrate to `EngineArena` (persistent).
## Usage
```cpp
// 1. Get an Arena
MemoryArena* arena = GetScratchArena();

// 2. Push Data
MyStruct* data = ArenaPushType<MyStruct>(arena, "Tag");
void* raw = ArenaPush(arena, 1024, 16, "RawBuffer");

// 3. Pop Data (LIFO)
ArenaPop(arena, raw, 1024); // Reclaims memory

// 4. Reset (Scratch only)
ScratchArenaReset(); // Returns pages to g_ScratchMemory
```
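Markers complement `ArenaPop` when a whole range of pushes should be reclaimed at once (a sketch; the temp-path workload is illustrative):

```cpp
ArenaMarker marker = ArenaGetMarker(arena);
char* tempPath = static_cast<char*>(ArenaPush(arena, 260, 16, "TempPath"));
// ... transient work using tempPath ...
ArenaResetToMarker(arena, marker); // also returns any pages pushed after the marker
```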
## Migration Status
- **ImGui**: Migrated to `GetEngineArena()` (Paged) with `ArenaPop` support for efficient vector resizing.
- **Display/Window**: Uses Engine Arena.
- **Game Entities**: Uses Game Arena.

View File

@@ -1,4 +1,4 @@
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 14
VisualStudioVersion = 14.0.22823.1
MinimumVisualStudioVersion = 10.0.40219.1
@@ -7,6 +7,11 @@ EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Game", "Game\Game.vcxproj", "{B1D040D0-6C94-4F93-BC2A-7F5284B7D434}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "JulietApp", "JulietApp\JulietApp.vcxproj", "{1DEE51CA-6C94-4F93-BC2A-7F5284B7D434}"
ProjectSection(ProjectDependencies) = postProject
{AB9C7E88-6C94-4F93-BC2A-7F5284B7D434} = {AB9C7E88-6C94-4F93-BC2A-7F5284B7D434}
{C16FFE36-6C94-4F93-BC2A-7F5284B7D434} = {C16FFE36-6C94-4F93-BC2A-7F5284B7D434}
{B1D040D0-6C94-4F93-BC2A-7F5284B7D434} = {B1D040D0-6C94-4F93-BC2A-7F5284B7D434}
EndProjectSection
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Juliet", "Juliet\Juliet.vcxproj", "{AB9C7E88-6C94-4F93-BC2A-7F5284B7D434}"
EndProject

View File

@@ -41,3 +41,6 @@ constexpr int8 int8Max = MaxValueOf<int8>();
constexpr int16 int16Max = MaxValueOf<int16>();
constexpr int32 int32Max = MaxValueOf<int32>();
constexpr int64 int64Max = MaxValueOf<int64>();
#define Kilobytes(value) ((value) * 1024ULL)
#define Megabytes(value) (Kilobytes(value) * 1024ULL)
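// Sanity checks (a sketch; assumes static_assert is usable at this point in the header).
// The parentheses above matter: without them, Megabytes(1 + 1) would expand incorrectly.
static_assert(Kilobytes(1) == 1024, "Kilobytes");
static_assert(Megabytes(1 + 1) == 2ULL * 1024 * 1024, "Megabytes must parenthesize its argument");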

View File

@@ -8,33 +8,55 @@
namespace Juliet
{
struct MemoryArena
{
uint8* Data;
size_t Size;
size_t Offset;
#if JULIET_DEBUG
struct AllocationInfo
{
size_t Offset;
size_t Size;
String Tag;
};
// Use a simple array for now to avoid std::vector dependency in the header or complex management
// Ideally this should be a linked list or similar
static constexpr size_t kMaxAllocations = 4096;
AllocationInfo Allocations[kMaxAllocations];
size_t AllocationCount = 0;
#endif
// --- Paged Memory Architecture ---
// [Verifying Rebuild]
struct MemoryBlock
{
static constexpr uint32 kMagic = 0xAA55AA55;
uint32 Magic;
MemoryBlock* Next; // Next block in the chain (Arena) or FreeList (Pool)
size_t TotalSize; // Total size of this block (including header)
size_t Used; // Offset relative to the start of Data
// Data follows immediately.
// We use a helper to access it, which avoids flexible-array-member warnings under strict C++.
uint8* GetData() { return reinterpret_cast<uint8*>(this + 1); }
const uint8* GetData() const { return reinterpret_cast<const uint8*>(this + 1); }
};
JULIET_API void MemoryArenaCreate(MemoryArena* arena, void* backingMemory, size_t size);
struct MemoryPool
{
void* BaseAddress = nullptr;
size_t TotalSize = 0;
MemoryBlock* FreeList = nullptr;
[[nodiscard]] MemoryBlock* AllocateBlock(size_t minCapacity);
void FreeBlock(MemoryBlock* block);
};
struct MemoryArena
{
MemoryPool* BackingPool;
MemoryBlock* CurrentBlock;
MemoryBlock* FirstBlock;
// Marker behavior is now tricky with pages.
// Simple Marker = { Block*, Offset }
};
struct ArenaMarker
{
MemoryBlock* Block;
size_t Offset;
};
JULIET_API void MemoryArenaCreate(MemoryArena* arena, MemoryPool* pool);
JULIET_API void* ArenaPush(MemoryArena* arena, size_t size, size_t alignment, String tag);
JULIET_API void* ArenaRealloc(MemoryArena* arena, void* oldPtr, size_t oldSize, size_t newSize, size_t alignment, String tag);
JULIET_API bool ArenaPop(MemoryArena* arena, void* ptr, size_t size);
JULIET_API void ArenaReset(MemoryArena* arena);
JULIET_API size_t ArenaGetMarker(MemoryArena* arena);
JULIET_API void ArenaResetToMarker(MemoryArena* arena, size_t marker);
JULIET_API ArenaMarker ArenaGetMarker(MemoryArena* arena);
JULIET_API void ArenaResetToMarker(MemoryArena* arena, ArenaMarker marker);
// --- Global Arenas & Management ---
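// Usage sketch for the marker API above (ArenaScope is a hypothetical helper, not part of this header):
// struct ArenaScope
// {
//     MemoryArena* Arena;
//     ArenaMarker Marker;
//     explicit ArenaScope(MemoryArena* arena) : Arena(arena), Marker(ArenaGetMarker(arena)) {}
//     ~ArenaScope() { ArenaResetToMarker(Arena, Marker); }
// };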

View File

@@ -3,12 +3,13 @@
#include <Core/ImGui/ImGuiService.h>
#include <Core/ImGui/ImGuiTests.h>
#include <Core/Logging/LogManager.h>
#include <Core/Memory/EngineArena.h>
#include <Core/Memory/MemoryArena.h>
#include <backends/imgui_impl_win32.h>
#include <imgui.h>
#include <Core/Memory/EngineArena.h>
#include <cstdio>
// Forward declare implementation functions from backends
@@ -21,14 +22,36 @@ namespace Juliet::ImGuiService
ImGuiContext* g_ImGuiContext = nullptr;
bool g_Initialized = false;
// Dedicated Paged Arena for ImGui
// Sharing the same underlying Engine Pool for blocks, but separate Arena chain.
MemoryArena g_ImGuiArena;
void* ImGuiAllocWrapper(size_t size, void* /*user_data*/)
{
return ArenaPush(GetEngineArena(), size, 16, ConstString("ImGui"));
// Store size in header to allow Pop
// Align total size to 16 to avoid padding issues with ArenaPop LIFO check
size_t actualSize = size + 16;
actualSize = (actualSize + 15) & ~static_cast<size_t>(15);
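// e.g. size = 25 -> 25 + 16 = 41 -> rounded up to 48 (16-byte header + 32 usable bytes).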
// We save the size at push time so we can pop exactly that size later.
if (void* ptr = ArenaPush(&g_ImGuiArena, actualSize, 16, ConstString("ImGui")))
{
// Write size at start
*static_cast<size_t*>(ptr) = actualSize;
return static_cast<uint8*>(ptr) + 16;
}
return nullptr;
}
void ImGuiFreeWrapper(void* /*ptr*/, void* /*user_data*/)
void ImGuiFreeWrapper(void* ptr, void* /*user_data*/)
{
// No-op free for linear allocator.
Assert(ptr);
uint8* originalPtr = static_cast<uint8*>(ptr) - 16;
size_t actualSize = *reinterpret_cast<size_t*>(originalPtr);
// Attempt LIFO Pop
ArenaPop(&g_ImGuiArena, originalPtr, actualSize);
}
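// Allocation layout produced by the wrappers above (illustrative):
//   originalPtr                      originalPtr + 16
//   [ size_t actualSize | padding ][ user data returned to ImGui ... ]
// ArenaPop() receives originalPtr and actualSize, matching the original push exactly.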
} // namespace
@@ -36,6 +59,9 @@ namespace Juliet::ImGuiService
{
Assert(!g_Initialized);
// Initialize ImGui Arena using Engine Pool
MemoryArenaCreate(&g_ImGuiArena, GetEngineArena()->BackingPool);
// Setup Allocator
ImGui::SetAllocatorFunctions(ImGuiAllocWrapper, ImGuiFreeWrapper, nullptr);

View File

@@ -1,7 +1,9 @@
#include <Core/Logging/LogManager.h>
#include <Core/Memory/Allocator.h>
#include <Core/Memory/MemoryArena.h>
#include <Core/Memory/Utils.h>
#include <algorithm> // For std::max
#include <cstring>
namespace Juliet
@@ -11,180 +13,324 @@ namespace Juliet
extern void TestMemoryArena();
}
void MemoryArenaCreate(MemoryArena* arena, void* backingMemory, size_t size)
// --- MemoryPool Implementation ---
// Simple First-Fit Allocator
MemoryBlock* MemoryPool::AllocateBlock(size_t minCapacity)
{
// Require space for Header + Data
size_t totalUnalignedSize = sizeof(MemoryBlock) + minCapacity;
size_t requiredSize = (totalUnalignedSize + 15) & ~static_cast<size_t>(15);
MemoryBlock** prevPtr = &FreeList;
MemoryBlock* curr = FreeList;
while (curr)
{
if (curr->TotalSize >= requiredSize)
{
// Match
// Check if we can split this block?
if (curr->TotalSize >= requiredSize + sizeof(MemoryBlock) + 16)
{
// Split
size_t remainingSize = curr->TotalSize - requiredSize;
MemoryBlock* nextBlock = reinterpret_cast<MemoryBlock*>((uint8*)curr + requiredSize);
nextBlock->Magic = MemoryBlock::kMagic;
nextBlock->TotalSize = remainingSize;
nextBlock->Used = 0;
nextBlock->Next = curr->Next;
// Update FreeList to point to the new remaining block instead of curr
*prevPtr = nextBlock;
// Update curr to be the allocated chunk
curr->TotalSize = requiredSize;
}
else
{
// Take the whole block
*prevPtr = curr->Next;
}
curr->Next = nullptr;
curr->Used = 0;
curr->Magic = MemoryBlock::kMagic;
#if JULIET_DEBUG
if (curr->TotalSize > sizeof(MemoryBlock))
{
MemSet(curr->GetData(), 0xCD, curr->TotalSize - sizeof(MemoryBlock));
}
#endif
return curr;
}
prevPtr = &curr->Next;
curr = curr->Next;
}
// Out of Memory in Pool
Assert(false, "MemoryPool exhausted!");
return nullptr;
}
void MemoryPool::FreeBlock(MemoryBlock* block)
{
if (!block)
{
return;
}
Assert(block->Magic == MemoryBlock::kMagic);
// Poison Header and Data in Debug
#if JULIET_DEBUG
// 0xDD = Dead Data
MemSet(block->GetData(), 0xDD, block->TotalSize - sizeof(MemoryBlock));
block->Magic = 0xDEADBEEF;
#endif
// Insert at Head of FreeList (Simplest, no coalescing yet)
block->Next = FreeList;
FreeList = block;
}
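// Pool round-trip (illustrative):
//   MemoryBlock* a = pool.AllocateBlock(Kilobytes(4));
//   pool.FreeBlock(a);
//   MemoryBlock* b = pool.AllocateBlock(Kilobytes(4)); // may reuse the freed block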
// --- MemoryArena Implementation ---
void MemoryArenaCreate(MemoryArena* arena, MemoryPool* pool)
{
Assert(arena);
Assert(backingMemory);
arena->Data = static_cast<uint8*>(backingMemory);
arena->Size = size;
arena->Offset = 0;
#if JULIET_DEBUG
arena->AllocationCount = 0;
#endif
Assert(pool);
arena->BackingPool = pool;
arena->CurrentBlock = nullptr;
arena->FirstBlock = nullptr;
}
// NOTE: The previous signature was (MemoryArena*, void* backingMemory, size_t size).
// A backward-compatible overload could be kept for tests, but callers should migrate to Pools.
void* ArenaPush(MemoryArena* arena, size_t size, size_t alignment, [[maybe_unused]] String tag)
{
Assert(arena);
Assert(arena->BackingPool);
// Alignment must be power of 2
// Default page size for arena blocks. 64KB keeps test allocations granular;
// production could use a larger (or configurable) page size.
constexpr size_t kDefaultBlockSize = 64 * 1024; // 64KB pages
// Alignment check
Assert((alignment & (alignment - 1)) == 0);
size_t currentPtr = reinterpret_cast<size_t>(arena->Data + arena->Offset);
size_t offset = (currentPtr + (alignment - 1)) & ~(alignment - 1);
size_t newOffset = offset - reinterpret_cast<size_t>(arena->Data) + size;
if (newOffset > arena->Size)
{
Assert(false, "Memory Arena overflow");
return nullptr;
}
void* result = arena->Data + (offset - reinterpret_cast<size_t>(arena->Data));
arena->Offset = newOffset;
#if JULIET_DEBUG
if (arena->AllocationCount < MemoryArena::kMaxAllocations)
{
arena->Allocations[arena->AllocationCount] = { offset - reinterpret_cast<size_t>(arena->Data), size,
IsValid(tag) ? tag : WrapString("Unknown") };
arena->AllocationCount++;
}
#endif
return result;
if (!arena->CurrentBlock)
{
// Initial Allocation
size_t allocSize = std::max(size, kDefaultBlockSize);
arena->CurrentBlock = arena->BackingPool->AllocateBlock(allocSize);
arena->FirstBlock = arena->CurrentBlock;
}
// Try allocation in CurrentBlock
MemoryBlock* blk = arena->CurrentBlock;
size_t currentAddr = reinterpret_cast<size_t>(blk->GetData()) + blk->Used;
size_t alignmentOffset = 0;
size_t mask = alignment - 1;
if (currentAddr & mask)
{
alignmentOffset = alignment - (currentAddr & mask);
}
if (blk->Used + alignmentOffset + size > blk->TotalSize - sizeof(MemoryBlock))
{
// Overflow! Request new block.
// Request at least the default page size so a tiny overflow doesn't produce a tiny block.
size_t allocSize = std::max(size, kDefaultBlockSize);
MemoryBlock* newBlock = arena->BackingPool->AllocateBlock(allocSize);
// Link
blk->Next = newBlock;
arena->CurrentBlock = newBlock;
blk = newBlock;
// Recalc for new block (Used should be 0)
currentAddr = reinterpret_cast<size_t>(blk->GetData());
alignmentOffset = 0;
// Re-check alignment at the start of the new block.
if (currentAddr & mask)
{
alignmentOffset = alignment - (currentAddr & mask);
}
}
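// Worked example (illustrative): if GetData() + Used ends at an address ending in
// 0x8 and alignment is 16, then mask = 15 and alignmentOffset = 8, so the returned
// pointer sits 8 bytes further along and Used advances by 8 + size.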
// Commit
blk->Used += alignmentOffset;
void* ptr = blk->GetData() + blk->Used;
blk->Used += size;
return ptr;
}
void* ArenaRealloc(MemoryArena* arena, void* oldPtr, size_t oldSize, size_t newSize, size_t alignment, String tag)
{
Assert(arena);
// Alignment must be power of 2
Assert((alignment & (alignment - 1)) == 0);
if (oldPtr == nullptr)
{
return ArenaPush(arena, newSize, alignment, tag);
}
if (newSize == 0)
{
return nullptr;
}
// Check if the old allocation is at the top of the stack:
// verify that (oldPtr + oldSize) == (Data + Offset).
// Note: we cast to uint8* because pointer arithmetic on void* is not allowed.
uint8* oldPtrBytes = static_cast<uint8*>(oldPtr);
uint8* arenaEnd = arena->Data + arena->Offset;
if (oldPtrBytes + oldSize == arenaEnd)
{
// It is the last allocation, so we can reuse the space if there is room to grow.
// The current Offset corresponds to oldPtrBytes + oldSize; move it to oldPtrBytes + newSize.
size_t oldPtrOffset = static_cast<size_t>(oldPtrBytes - arena->Data);
size_t newOffset = oldPtrOffset + newSize;
if (newOffset > arena->Size)
{
// Cannot expand in place; fall through to Alloc + Copy.
}
else
{
// In-place resize success.
arena->Offset = newOffset;
#if JULIET_DEBUG
// Update the last allocation size
if (arena->AllocationCount > 0)
{
arena->Allocations[arena->AllocationCount - 1].Size = newSize;
}
#endif
return oldPtr;
}
}
Assert(oldPtr);
Assert(newSize != 0);
// Optimized Case: Expanding the LAST allocation in the Current Block
MemoryBlock* blk = arena->CurrentBlock;
uint8* oldBytes = static_cast<uint8*>(oldPtr);
// Is oldPtr inside current block?
if (oldBytes >= blk->GetData() && oldBytes < blk->GetData() + blk->TotalSize - sizeof(MemoryBlock))
{
// Is it the last one?
if (oldBytes + oldSize == blk->GetData() + blk->Used)
{
// Can we expand?
if (blk->Used + (newSize - oldSize) <= blk->TotalSize - sizeof(MemoryBlock))
{
// Yes, expand in place
blk->Used += (newSize - oldSize);
return oldPtr;
}
}
}
// Fallback: Alloc + Copy
// Fallback: Copy
void* newPtr = ArenaPush(arena, newSize, alignment, tag);
if (newPtr)
{
size_t copySize = oldSize < newSize ? oldSize : newSize;
MemCopy(newPtr, oldPtr, copySize);
}
MemCopy(newPtr, oldPtr, std::min(oldSize, newSize));
return newPtr;
}
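// Illustrative: growing the most recent allocation is free, growing an older one copies.
//   void* a = ArenaPush(arena, 64, 16, ConstString("A"));
//   a = ArenaRealloc(arena, a, 64, 128, 16, ConstString("A"));  // in place: Used += 64
//   void* b = ArenaPush(arena, 32, 16, ConstString("B"));
//   a = ArenaRealloc(arena, a, 128, 256, 16, ConstString("A")); // not top: push + copy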
bool ArenaPop(MemoryArena* arena, void* ptr, size_t size)
{
Assert(arena);
Assert(ptr);
Assert(size);
MemoryBlock* blk = arena->CurrentBlock;
Assert(blk);
uint8* ptrBytes = static_cast<uint8*>(ptr);
uint8* currentTop = blk->GetData() + blk->Used;
// Check if this pointer is exactly at the top of the stack (LIFO)
if (ptrBytes + size == currentTop)
{
// Yes, we can just rewind the Used pointer
blk->Used -= size;
return true;
}
return false;
}
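// Typical call site (illustrative): a transient buffer can be returned immediately
// if nothing was pushed after it:
//   void* tmp = ArenaPush(arena, n, 16, ConstString("Tmp"));
//   /* ... use tmp ... */
//   ArenaPop(arena, tmp, n); // true if reclaimed; false if tmp is no longer top-of-stack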
void ArenaReset(MemoryArena* arena)
{
Assert(arena);
arena->Offset = 0;
#if JULIET_DEBUG
arena->AllocationCount = 0;
#endif
Assert(arena->FirstBlock);
// Keep FirstBlock, Free the rest.
MemoryBlock* curr = arena->FirstBlock->Next;
while (curr)
{
MemoryBlock* next = curr->Next;
arena->BackingPool->FreeBlock(curr);
curr = next;
}
arena->FirstBlock->Next = nullptr;
arena->FirstBlock->Used = 0;
arena->CurrentBlock = arena->FirstBlock;
#if JULIET_DEBUG
// Poison First Block
MemSet(arena->FirstBlock->GetData(), 0xCD, arena->FirstBlock->TotalSize - sizeof(MemoryBlock));
#endif
}
size_t ArenaGetMarker(MemoryArena* arena)
{
Assert(arena);
return arena->Offset;
}
void ArenaResetToMarker(MemoryArena* arena, size_t marker)
{
Assert(arena);
Assert(marker <= arena->Offset);
arena->Offset = marker;
#if JULIET_DEBUG
while (arena->AllocationCount > 0)
{
if (arena->Allocations[arena->AllocationCount - 1].Offset >= marker)
{
arena->AllocationCount--;
}
else
{
break;
}
}
#endif
}
// --- Global Arenas & Management ---
ArenaMarker ArenaGetMarker(MemoryArena* arena)
{
Assert(arena);
return { arena->CurrentBlock, arena->CurrentBlock ? arena->CurrentBlock->Used : 0 };
}
void ArenaResetToMarker(MemoryArena* arena, ArenaMarker marker)
{
Assert(arena);
if (!marker.Block)
{
// If marker block is null, it might mean "start" or "empty".
// But if the arena has blocks, this is suspicious.
// If the arena was empty when marker was taken, this is valid.
ArenaReset(arena);
return;
}
// Free blocks *after* the marker block
MemoryBlock* curr = marker.Block->Next;
while (curr)
{
MemoryBlock* next = curr->Next;
arena->BackingPool->FreeBlock(curr);
curr = next;
}
marker.Block->Next = nullptr;
marker.Block->Used = marker.Offset;
arena->CurrentBlock = marker.Block;
}
// --- Global Arenas ---
namespace
{
MemoryPool g_ScratchMemory;
MemoryPool g_EngineMemory;
MemoryPool g_GameMemory;
MemoryArena g_ScratchArena;
MemoryArena g_EngineArena;
MemoryArena g_GameArena;
void* g_ScratchBacking = nullptr;
void* g_EngineBacking = nullptr;
void* g_GameBacking = nullptr;
// Backing Buffers
void* g_ScratchBuffer = nullptr;
void* g_EngineBuffer = nullptr;
void* g_GameBuffer = nullptr;
constexpr size_t kScratchSize = 64 * 1024 * 1024; // 64MB
constexpr size_t kEngineSize = 256 * 1024 * 1024; // 256MB
constexpr size_t kGameSize = 512 * 1024 * 1024; // 512MB
constexpr size_t kScratchSize = Megabytes(64);
constexpr size_t kEngineSize = Megabytes(256);
constexpr size_t kGameSize = Megabytes(512);
void InitPool(MemoryPool* pool, void* buffer, size_t size)
{
pool->BaseAddress = buffer;
pool->TotalSize = size;
// Create one giant initial block
Assert(size > sizeof(MemoryBlock));
MemoryBlock* block = static_cast<MemoryBlock*>(buffer);
block->Magic = MemoryBlock::kMagic;
block->Next = nullptr;
block->TotalSize = size;
block->Used = 0;
pool->FreeList = block;
}
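// Resulting layout (illustrative): the whole buffer starts as one giant free block,
//   [ MemoryBlock header | (size - sizeof(MemoryBlock)) bytes of data ]
// which AllocateBlock() then splits on demand.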
} // namespace
MemoryArena* GetScratchArena()
{
return &g_ScratchArena;
}
MemoryArena* GetEngineArena()
{
return &g_EngineArena;
}
MemoryArena* GetGameArena()
{
return &g_GameArena;
@@ -197,17 +343,17 @@ namespace Juliet
void MemoryArenasInit()
{
// TODO: Use the VirtualAlloc API for this on windows
g_ScratchBacking = Malloc(kScratchSize);
MemSet(g_ScratchBacking, 0, kScratchSize);
g_EngineBacking = Malloc(kEngineSize);
MemSet(g_EngineBacking, 0, kEngineSize);
g_GameBacking = Malloc(kGameSize);
MemSet(g_GameBacking, 0, kGameSize);
g_ScratchBuffer = Malloc(kScratchSize);
g_EngineBuffer = Malloc(kEngineSize);
g_GameBuffer = Malloc(kGameSize);
MemoryArenaCreate(&g_ScratchArena, g_ScratchBacking, kScratchSize);
MemoryArenaCreate(&g_EngineArena, g_EngineBacking, kEngineSize);
MemoryArenaCreate(&g_GameArena, g_GameBacking, kGameSize);
InitPool(&g_ScratchMemory, g_ScratchBuffer, kScratchSize);
InitPool(&g_EngineMemory, g_EngineBuffer, kEngineSize);
InitPool(&g_GameMemory, g_GameBuffer, kGameSize);
MemoryArenaCreate(&g_ScratchArena, &g_ScratchMemory);
MemoryArenaCreate(&g_EngineArena, &g_EngineMemory);
MemoryArenaCreate(&g_GameArena, &g_GameMemory);
#if JULIET_DEBUG
UnitTest::TestMemoryArena();
@@ -216,8 +362,11 @@ namespace Juliet
void MemoryArenasShutdown()
{
SafeFree(g_ScratchBacking);
SafeFree(g_EngineBacking);
SafeFree(g_GameBacking);
// No need to walk the pools and free individual blocks: we own the giant
// backing buffers, so freeing them releases everything at once.
SafeFree(g_ScratchBuffer);
SafeFree(g_EngineBuffer);
SafeFree(g_GameBuffer);
}
} // namespace Juliet

View File

@@ -1,67 +1,76 @@
#include <Core/Common/CoreTypes.h>
#include <Core/Common/CoreUtils.h>
#include <Core/Memory/Allocator.h>
#include <Core/Memory/MemoryArena.h>
#include <cstdio>
#if JULIET_DEBUG
namespace Juliet::UnitTest
{
// Need access to internal Pool functions? They are in the header now!
// MemoryPool is declared in header.
void TestMemoryArena()
{
// 1. Core Arena Functionality
uint8 buffer[1024];
MemoryArena arena;
MemoryArenaCreate(&arena, buffer, 1024);
Assert(arena.Offset == 0);
Assert(arena.Size == 1024);
void* p1 = ArenaPush(&arena, 100, 16, ConstString("Test"));
Assert(p1 != nullptr);
Assert(arena.Offset >= 100);
size_t marker = ArenaGetMarker(&arena);
void* p2 = ArenaPush(&arena, 200, 16, ConstString("Test"));
Assert(p2 != nullptr);
Assert(arena.Offset >= marker + 200);
ArenaResetToMarker(&arena, marker);
Assert(arena.Offset == marker);
ArenaReset(&arena);
Assert(arena.Offset == 0);
// 2. Alignment Test
void* p3 = ArenaPush(&arena, 1, 1, ConstString("Test"));
[[maybe_unused]] size_t addr = reinterpret_cast<size_t>(p3);
void* p4 = ArenaPush(&arena, 1, 16, ConstString("Test"));
size_t addr2 = reinterpret_cast<size_t>(p4);
Assert((addr2 % 16) == 0);
// 3. Template Helpers
struct TestData
{
int a;
float b;
};
TestData* data = ArenaPushType<TestData>(&arena, ConstString("Test"));
Assert(data != nullptr);
data->a = 10;
data->b = 20.0f;
TestData* dataArray = ArenaPushArray<TestData>(&arena, 10, ConstString("Test"));
Assert(dataArray != nullptr);
// 4. Scratch Arena
MemoryArena* scratch = GetScratchArena();
Assert(scratch != nullptr);
void* sp = ArenaPush(scratch, 100, 16, ConstString("Test"));
Assert(sp != nullptr);
ScratchArenaReset();
Assert(scratch->Offset == 0);
printf("All MemoryArena tests passed.\n");
printf("Running Paged Memory Arena Tests...\n");
// Setup Pool and Arena for Pop Tests
size_t testPoolSize = Megabytes(1);
void* testBacking = Calloc(1, testPoolSize);
MemoryPool pool;
pool.BaseAddress = testBacking;
pool.TotalSize = testPoolSize;
pool.FreeList = nullptr;
// Initialize FreeList (Simulate pool)
size_t blockSize = Kilobytes(128);
size_t numBlocks = testPoolSize / blockSize;
uint8* ptr = static_cast<uint8*>(testBacking);
for (size_t i = 0; i < numBlocks; ++i)
{
MemoryBlock* blk = reinterpret_cast<MemoryBlock*>(ptr + i * blockSize);
blk->Magic = MemoryBlock::kMagic;
blk->TotalSize = blockSize;
blk->Used = 0;
blk->Next = pool.FreeList;
pool.FreeList = blk;
}
MemoryArena arena;
MemoryArenaCreate(&arena, &pool);
// 5. Arena Pop
// Align sizes to 16 to avoid padding issues during Pop
void* pop1 = ArenaPush(&arena, 5008, 16, ConstString("Pop1"));
void* pop2 = ArenaPush(&arena, 208, 16, ConstString("Pop2")); // 208 = 200 rounded up to a 16-byte multiple
// Pop Middle (should fail)
bool res1 = ArenaPop(&arena, pop1, 5008);
Assert(res1 == false);
// Pop Top (should succeed)
bool res2 = ArenaPop(&arena, pop2, 208);
Assert(res2 == true);
// Note: ArenaGetMarker(&arena).Offset is just CurrentBlock->Used, so this assert is
// a tautology; the real check is that a new push lands exactly where pop2 was.
Assert(arena.CurrentBlock->Used == ArenaGetMarker(&arena).Offset);
void* pop3 = ArenaPush(&arena, 16, 16, ConstString("Pop3"));
Assert(pop3 == pop2);
// Unwind the stack completely: pop pop3, then pop1.
ArenaPop(&arena, pop3, 16);
ArenaPop(&arena, pop1, 5008);
Assert(arena.CurrentBlock->Used == 0);
printf("[Success] Arena Pop\n");
// Cleanup
SafeFree(testBacking);
printf("All Paged MemoryArena tests passed.\n");
}
} // namespace Juliet::UnitTest
#endif

View File

@@ -12,53 +12,62 @@ namespace Juliet::Debug
{
if (ImGui::CollapsingHeader(CStr(name), ImGuiTreeNodeFlags_DefaultOpen))
{
float progress = 0.0f;
if (arena.Size > 0)
{
progress = (float)arena.Offset / (float)arena.Size;
}
// Calculate Stats
size_t totalCapacity = 0;
size_t totalUsed = 0;
size_t blockCount = 0;
MemoryBlock* curr = arena.FirstBlock;
while (curr)
{
totalCapacity += curr->TotalSize;
totalUsed += curr->Used;
blockCount++;
curr = curr->Next;
}
float progress = 0.0f;
if (totalCapacity > 0)
{
progress = (float)totalUsed / (float)totalCapacity;
}
char overlay[64];
sprintf_s(overlay, "%zu / %zu bytes", arena.Offset, arena.Size);
sprintf_s(overlay, "%zu / %zu bytes (%zu blocks)", totalUsed, totalCapacity, blockCount);
ImGui::ProgressBar(progress, ImVec2(0.0f, 0.0f), overlay);
#if JULIET_DEBUG
ImGui::PushID(CStr(name));
if (ImGui::TreeNode("Allocations"))
if (ImGui::TreeNode("Blocks"))
{
size_t displayedSize = 0;
// Draw allocations as a list for now
if (ImGui::BeginTable("AllocationsTable", 3, ImGuiTableFlags_Borders | ImGuiTableFlags_RowBg | ImGuiTableFlags_Resizable))
if (ImGui::BeginTable("BlocksTable", 3, ImGuiTableFlags_Borders | ImGuiTableFlags_RowBg | ImGuiTableFlags_Resizable))
{
ImGui::TableSetupColumn("Tag");
ImGui::TableSetupColumn("Size");
ImGui::TableSetupColumn("Offset");
ImGui::TableSetupColumn("ID");
ImGui::TableSetupColumn("Used");
ImGui::TableSetupColumn("Capacity");
ImGui::TableHeadersRow();
for (size_t i = 0; i < arena.AllocationCount; ++i)
{
const auto& alloc = arena.Allocations[i];
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0);
ImGui::Text("%s", CStr(alloc.Tag));
ImGui::TableSetColumnIndex(1);
ImGui::Text("%zu", alloc.Size);
ImGui::TableSetColumnIndex(2);
ImGui::Text("%zu", alloc.Offset);
displayedSize += alloc.Size;
}
size_t idx = 0;
curr = arena.FirstBlock;
while (curr)
{
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0);
ImGui::Text("%zu", idx++);
ImGui::TableSetColumnIndex(1);
ImGui::Text("%zu", curr->Used);
ImGui::TableSetColumnIndex(2);
ImGui::Text("%zu", curr->TotalSize);
curr = curr->Next;
}
ImGui::EndTable();
}
ImGui::Text("Total Tracked Size: %zu", displayedSize);
ImGui::Text("Untracked/Padding: %zu", arena.Offset - displayedSize);
ImGui::TreePop();
}
ImGui::PopID();
#else
ImGui::Text("Detailed allocation tracking disabled in Release build.");
#endif
}
}
} // namespace

View File

@@ -1,3 +1,3 @@
@echo off
call misc\shell.bat
rem call misc\shell.bat
fbuild %* -cache