Memory limits via allocation sampling #66

@SquidDev

Description

One of Cobalt's weaker points is that it does not impose any limits on the amount of memory the VM can use. Ideally CC: Tweaked would switch over to a more native-style VM which does support this (see cc-tweaked/CC-Tweaked#769), but I think that's a long way away.

Unfortunately, it is impractical to track every single allocation - this would make the implementation significantly more complex, and incur a massive overhead.

One alternative idea, inspired by this OCaml package (though perhaps obvious in retrospect) is to monitor a small sample of our allocations, and estimate actual memory usage based on those. To further simplify things, I propose we only track array allocations: memory usage will be higher than our estimate, but it should still be bounded by some constant factor (2-3x?).
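To make the idea concrete, here is a minimal, self-contained simulation (not Cobalt code; all names are made up for illustration) showing why counter-based sampling gives a usable estimate: sampling roughly one allocation in every thousand and scaling each sampled size up by the sampling rate recovers the true total to within a few percent.

```java
import java.util.Random;

public class SamplingEstimate {
    static final int SAMPLE_RATE = 1000;

    /**
     * Simulate a stream of allocations, sampling roughly 1 in SAMPLE_RATE,
     * and return the ratio of estimated to actual bytes allocated.
     */
    static double run(long seed) {
        Random rng = new Random(seed);
        long actual = 0, estimated = 0;
        // Random counter in [0, 2 * rate): the mean gap between samples is
        // ~rate, but which allocations get sampled is unpredictable.
        int counter = rng.nextInt(2 * SAMPLE_RATE);
        for (int i = 0; i < 1_000_000; i++) {
            int size = 16 + rng.nextInt(256); // pretend array size in bytes
            actual += size;
            if (--counter < 0) {
                // Each sample stands in for ~SAMPLE_RATE allocations.
                estimated += (long) size * SAMPLE_RATE;
                counter = rng.nextInt(2 * SAMPLE_RATE);
            }
        }
        return (double) estimated / actual;
    }

    public static void main(String[] args) {
        System.out.printf("estimated/actual = %.3f%n", run(42));
    }
}
```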

Implementation

  • Add some Allocator class, which provides a newTypeArray(LuaState, int size) method for the various core types (byte, LuaValue, a generic T), as well as a corresponding resizeArray.
  • The LuaState is augmented with three fields:
    • int allocationCounter: Tracks how many allocations are left before we take another sample.
    • final long maxMemory: The maximum memory we can allocate.
    • AtomicLong currentMemory: The current memory. Note this needs to be atomic as we'll decrement it from another thread.
  • When allocating an array, we compute the size of this array in bytes. If the size of the array is larger than a constant (16KiB?) or if decrementing the allocation counter would take it to < 0, then:
    • If this allocation would take us above the maxMemory, then error.
    • Otherwise, increment currentMemory and add this object to a queue of WeakReferences.
    • Update allocationCounter to be a random number between 0 and 2 * our sampling rate (probably 1k). This provides a very basic form of abuse mitigation, by making which allocations are sampled non-deterministic.
  • This reference queue is polled on a separate thread (it can be shared across all Lua VMs). Each WeakReference stores its original size and a reference to the owner's currentMemory. When the weak reference is polled, we decrement its owner's memory.
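The steps above can be sketched roughly as follows. This is an assumption-laden illustration, not Cobalt's actual API: the class and method names (`Allocator`, `newByteArray`, `TrackedAllocation`), the per-object header estimate, and in particular the decision to scale small sampled allocations by the sampling rate are all my guesses at unstated details. Note that each `WeakReference` must itself stay strongly reachable (here via a shared set) until the reaper thread processes it, or the refund never happens.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class Allocator {
    private static final int LARGE_ALLOCATION = 16 * 1024; // always track above this
    private static final int SAMPLE_RATE = 1000;           // ~1 in 1000 otherwise

    // Shared across all VMs: collected tracked arrays land here.
    private static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();
    // Keep the references strongly reachable until they are reaped.
    private static final Set<TrackedAllocation> TRACKED = ConcurrentHashMap.newKeySet();

    /** Weak reference remembering its charged size and its owner's counter. */
    private static final class TrackedAllocation extends WeakReference<Object> {
        final long size;
        final AtomicLong owner;
        TrackedAllocation(Object array, long size, AtomicLong owner) {
            super(array, QUEUE);
            this.size = size;
            this.owner = owner;
        }
    }

    private final Random rng = new Random();
    private final long maxMemory;
    private final AtomicLong currentMemory = new AtomicLong();
    private int allocationCounter;

    public Allocator(long maxMemory) {
        this.maxMemory = maxMemory;
        this.allocationCounter = rng.nextInt(2 * SAMPLE_RATE);
    }

    public long usedMemory() {
        return currentMemory.get();
    }

    public byte[] newByteArray(int size) {
        long bytes = 16 + size; // crude object header + payload estimate
        // Large arrays are always tracked; small ones don't decrement the
        // counter in that case thanks to short-circuit evaluation.
        if (bytes >= LARGE_ALLOCATION || --allocationCounter < 0) {
            // Assumption: small sampled allocations are scaled up by the
            // sampling rate so currentMemory estimates total usage.
            long charged = bytes >= LARGE_ALLOCATION ? bytes : bytes * SAMPLE_RATE;
            if (currentMemory.get() + charged > maxMemory) {
                throw new OutOfMemoryError("Lua VM memory limit exceeded");
            }
            currentMemory.addAndGet(charged);
            byte[] array = new byte[size];
            TRACKED.add(new TrackedAllocation(array, charged, currentMemory));
            allocationCounter = rng.nextInt(2 * SAMPLE_RATE);
            return array;
        }
        return new byte[size];
    }

    /** Run on one shared daemon thread: refund memory for collected arrays. */
    static void reaperLoop() throws InterruptedException {
        while (true) {
            TrackedAllocation ref = (TrackedAllocation) QUEUE.remove();
            TRACKED.remove(ref);
            ref.owner.addAndGet(-ref.size);
        }
    }
}
```

The memory check before `addAndGet` is racy under concurrent allocation, but a per-VM allocator is single-threaded on the allocation path; only the refund from the reaper thread needs the atomicity, which is why `currentMemory` is an `AtomicLong`.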

Concerns

The main concern here is that this is heavily tied to Java's GC. It's possible the Lua VM no longer holds a reference to a large object, but the GC hasn't collected it yet, so currentMemory remains high.

It might be safer to set the max memory to something arbitrarily high (1GiB?) and expose the memory usage via a metric. This way we can get a better idea of the current behaviour before doing anything more drastic.
