Understanding the Large Object Heap in .NET
Most .NET developers know the garbage collector exists and roughly what it does. Fewer know that objects above a certain size are handled by a completely separate allocator with different collection rules — one that can silently cause memory pressure, fragmentation, and GC pauses that don't show up the way you'd expect.
This is the Large Object Heap (LOH), and understanding it is worth the 10 minutes.
What is the LOH?
The .NET GC divides objects into two heaps based on size. Objects smaller than 85,000 bytes go on the Small Object Heap (SOH), which is generational — it has Gen0, Gen1, and Gen2. Objects 85,000 bytes or larger go on the Large Object Heap.
The LOH is non-generational. There is no Gen0/Gen1 equivalent — every LOH object is treated as Gen2 from the moment it's allocated. This has a direct consequence: LOH objects are only collected during a full Gen2 GC, which is expensive and relatively infrequent.
The 85 KB threshold applies to the object's total size, header included. A byte[] of 85,000 bytes or more lands on the LOH. So does a double[] of roughly 10,600 elements (10,625 × 8 bytes = 85,000 for the element data alone, before the array's object header). Strings and arrays are the most common culprits since they're the only variably sized types you allocate routinely.
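As a quick check of the threshold, you can inspect the generation a fresh allocation is reported in. A minimal sketch, assuming a standard 64-bit .NET runtime with a console project:

```csharp
using System;

var small = new byte[84_000];  // below 85,000 bytes: Small Object Heap, Gen0
var large = new byte[85_000];  // at the threshold: Large Object Heap

Console.WriteLine(GC.GetGeneration(small)); // typically 0 (freshly allocated SOH object)
Console.WriteLine(GC.GetGeneration(large)); // 2: LOH objects are reported as Gen2 immediately
```

The second line confirms the point above: the LOH object is "born" into Gen2 and will only be collected by a full Gen2 GC.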
Why the LOH is Treated Differently
The reason the LOH exists as a separate heap is performance — specifically, the cost of compaction.
On the SOH, the GC compacts after collection: it moves surviving objects together, closing gaps left by collected objects. This is fast for small objects but prohibitively expensive for large ones. Moving a 100 MB buffer means copying 100 MB of memory and updating every reference to it.
So by default, the LOH is not compacted. After a collection, the GC marks dead LOH objects as free space and adds them to a free list, but it doesn't move the surviving objects together. This is fast, but it means the heap can fragment over time — the free list has gaps, and a new large allocation may not fit into any of them even if total free space is sufficient.
Fragmentation in Practice
Fragmentation becomes a problem when you have mixed lifetimes — some large objects that are long-lived (pinned buffers, large caches) and some that are short-lived (response buffers, temporary arrays). The short-lived ones leave gaps that may not match the size of future allocations.
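The mixed-lifetime pattern can be illustrated with a contrived sketch (not production code; sizes and counts are arbitrary):

```csharp
using System;
using System.Collections.Generic;

var longLived = new List<byte[]>();

for (int i = 0; i < 100; i++)
{
    longLived.Add(new byte[100_000]);  // long-lived: stays on the LOH, pinning its position
    var temp = new byte[200_000];      // short-lived: becomes a gap after the next Gen2 GC
    // 'temp' dies here. The 200,000-byte hole it leaves sits between live
    // long-lived buffers, and only a future allocation of 200,000 bytes or
    // less can reuse it; anything larger needs fresh space at the end.
}
```

After enough iterations the LOH is a checkerboard of live buffers and free gaps, and total free space stops being a useful predictor of whether the next large allocation will fit.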
The symptom is usually one of:
- Memory use growing beyond what you'd expect from live objects
- GC.GetTotalMemory(false) returning a much smaller number than the process's working set
- Long GC pause times on Gen2 collections, since the GC still has to walk the entire LOH free list
You can measure fragmentation directly:
var info = GC.GetGCMemoryInfo();
Console.WriteLine($"Heap size: {info.HeapSizeBytes / 1024 / 1024} MB");
Console.WriteLine($"Fragmented bytes: {info.FragmentedBytes / 1024 / 1024} MB");
Console.WriteLine($"Fragmentation ratio: {(double)info.FragmentedBytes / info.HeapSizeBytes:P1}");
A fragmentation ratio above 20–30% on the LOH is worth investigating.
Forcing LOH Compaction
As of .NET Framework 4.5.1 (and in all versions of .NET Core and modern .NET), you can request a compacting collection:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
CompactOnce resets to Default (no compaction) after the next Gen2 collection, so you'd need to set it again each time. This is a blocking, stop-the-world operation — appropriate for a scheduled maintenance window or application startup, not a hot path.
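One way to use this safely in a maintenance window is to bracket the compaction with the fragmentation measurement from earlier. A sketch; FragmentationRatio is a helper name invented here, not a framework API:

```csharp
using System;
using System.Runtime;

static double FragmentationRatio()
{
    var info = GC.GetGCMemoryInfo();
    return (double)info.FragmentedBytes / info.HeapSizeBytes;
}

Console.WriteLine($"Before: {FragmentationRatio():P1}");

// Request a one-shot LOH compaction on the next blocking Gen2 collection
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);

Console.WriteLine($"After: {FragmentationRatio():P1}");
```

Logging the before/after ratio also tells you whether compaction was worth the pause, which informs how often (if ever) to schedule it.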
Practical Mitigation Strategies
The better answer is usually to reduce LOH pressure in the first place rather than relying on compaction.
Use ArrayPool<T> for temporary large buffers. ArrayPool rents and returns arrays instead of allocating a new one each time. The pooled arrays themselves still live on the LOH, but they're allocated once and reused, which avoids the allocate-and-discard churn that drives fragmentation:
var pool = ArrayPool<byte>.Shared;
byte[] buffer = pool.Rent(minimumLength: 100_000);
try
{
// use buffer
}
finally
{
pool.Return(buffer);
}
The rented array may be larger than minimumLength — always track the actual size you need separately.
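A small sketch of that pitfall (the exact returned size is an implementation detail of the shared pool, so treat the number in the comment as typical rather than guaranteed):

```csharp
using System;
using System.Buffers;

int needed = 100_000;
byte[] buffer = ArrayPool<byte>.Shared.Rent(needed);
try
{
    // Rent guarantees at least 'needed' bytes but usually returns more
    // (typically the next power-of-two bucket), so slice to what you asked for:
    Span<byte> usable = buffer.AsSpan(0, needed);
    usable.Fill(0xFF);

    Console.WriteLine(buffer.Length);  // likely 131072, not 100000
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}
```

Passing buffer.Length downstream instead of the requested size is a classic source of bugs (trailing garbage bytes from a previous renter), which is why tracking the logical length separately matters.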
Use MemoryPool<T> or IMemoryOwner<T> when you need to pass ownership of a buffer across async boundaries. These integrate with Span<T> and Memory<T>, which work well in pipeline-style code:
using IMemoryOwner<byte> owner = MemoryPool<byte>.Shared.Rent(100_000);
Memory<byte> memory = owner.Memory;
await ProcessAsync(memory);
// owner.Dispose() returns the memory to the pool
Pre-allocate long-lived large objects at startup rather than letting them come and go. If you need a large buffer for the lifetime of the application, allocate it once during initialization. Allocating it repeatedly creates short-lived LOH objects and fragments the heap faster.
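A sketch of the pattern; FrameProcessor and its buffer size are illustrative names, not from any real API:

```csharp
using System;

public sealed class FrameProcessor
{
    // One LOH allocation for the lifetime of the object, made at construction.
    // It never becomes garbage, so it never contributes to fragmentation churn.
    private readonly byte[] _frameBuffer = new byte[1_000_000];

    public void Process(ReadOnlySpan<byte> frame)
    {
        frame.CopyTo(_frameBuffer);   // reuse the same buffer on every call
        // ... work on _frameBuffer[0..frame.Length] ...
    }
}
```

Registering such objects as singletons (e.g. in a DI container) keeps the allocation count at exactly one per process.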
Avoid string concatenation that builds large strings incrementally. Each intermediate string is a separate allocation. Use StringBuilder or string.Create for large strings, and be mindful that a string over ~42,500 characters (UTF-16, 2 bytes per char) will land on the LOH.
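With StringBuilder, the intermediate growth happens inside the builder's internal chunks rather than as thousands of discarded strings. A sketch with arbitrary sizes:

```csharp
using System;
using System.Text;

var sb = new StringBuilder(capacity: 4_096);
for (int i = 0; i < 1_000; i++)
{
    sb.Append("item-").Append(i).Append(';');  // no intermediate string per iteration
}
string result = sb.ToString();  // one final allocation for the whole string
Console.WriteLine(result.Length);
```

The equivalent loop using `result += ...` would allocate a progressively larger throwaway string on every iteration, with the later ones individually crossing the LOH threshold.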
What to Look For in Application Insights / PerfView
When diagnosing LOH-related pressure in production:
- Watch Gen 2 GC count and Gen 2 GC pause time. LOH collections piggyback on Gen2, so elevated Gen2 activity is the first signal.
- The Large Object Heap Size performance counter (available via dotnet-counters) shows current LOH size.
- PerfView's GC heap snapshot will show you exactly which types are consuming LOH space.
dotnet-counters monitor --counters System.Runtime --process-id <pid>
Look for gc-heap-size and correlate spikes with request patterns.
Summary
The LOH isn't something you interact with directly, but it shapes memory behaviour in ways that can surprise you. The key points:
- Objects ≥ 85 KB go on the LOH and are only collected in Gen2 GCs
- The LOH is not compacted by default, so fragmentation accumulates over time
- ArrayPool<T> and MemoryPool<T> are the primary tools for keeping large-buffer churn off the LOH
- GCSettings.LargeObjectHeapCompactionMode exists when you need to force compaction, but it's a last resort
Most LOH problems are invisible until they're not — a service that runs fine for hours and then starts taking long GC pauses, or one whose memory use keeps climbing despite seemingly low allocation rates. Measuring fragmentation early, before it's a production issue, is cheaper than diagnosing it under pressure.