Conversation

kixorz

@kixorz kixorz commented Aug 26, 2025

Limit Redis memory usage to 25% of system RAM and enable eviction policy to prevent OOM.

Summary

This PR configures Redis in our Docker Compose deployment to use a bounded amount of memory (25% of the container’s available RAM) and enables an eviction policy. This reduces the risk of Redis consuming all available memory, triggering out-of-memory (OOM) conditions, and forcing costly instance size increases in cloud environments.

What changed

  • Updated the Redis service command to compute a maxmemory value at container start and pass it to redis-server.
  • Set the eviction policy to volatile-lru so Redis evicts the least recently used keys that have an expiration set when under memory pressure.

Why this is needed

  • Unlimited Redis memory leads to unbounded growth: Redis holds data in memory; without a cap, it can grow until the host/container runs out of memory.
  • OOM crashes and instability: When Redis (or the container) exhausts memory, the kernel may OOM-kill processes, causing downtime and data loss (in-memory datasets) or cascading failures in dependent services.
  • Cloud cost pitfalls: The usual band-aid for OOM is to permanently bump instance sizes (more RAM). That’s expensive and scales poorly as workload grows. Setting a sane cap plus an eviction policy keeps memory predictable and avoids unnecessary instance class upgrades.

How it works

  • On container startup, we read total memory from /proc/meminfo and compute 25% for --maxmemory.
  • We set --maxmemory-policy volatile-lru to evict the least recently used keys among those with TTLs when approaching the memory cap.
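The two steps above can be sketched as a startup snippet (a minimal sketch assuming a Linux container; variable names are illustrative, and the final invocation is shown with `echo` rather than the real `exec` in the compose command):

```shell
# Minimal sketch of the startup computation; not the exact compose entry.
# /proc/meminfo reports MemTotal in kB.
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
maxmemory_bytes=$(( mem_total_kb * 1024 / 4 ))   # 25% of total RAM, in bytes
# The resulting server invocation (shown here with echo):
echo redis-server --maxmemory "$maxmemory_bytes" --maxmemory-policy volatile-lru
```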

Impact

  • Predictable memory footprint for Redis within the container.
  • Reduced risk of host-level OOM and improved overall stability.
  • Keys without a TTL won’t be evicted by volatile-lru. If the dataset is dominated by non-expiring keys, you may want a different policy (see below).

Configuration and overrides

  • Default behavior: 25% of container RAM, eviction policy volatile-lru.
  • To change the allocation fraction: edit the compose command expression (e.g., use /3 or /2 instead of /4).
  • To change eviction policy: replace --maxmemory-policy volatile-lru with one of Redis’s supported policies (e.g., allkeys-lru, volatile-ttl, allkeys-random, noeviction, etc.).
  • If you prefer an explicit fixed cap, replace the computed value with a static byte value, for example: --maxmemory 2gb.
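For illustration, a fixed-cap variant of the compose service entry could look like this (a hypothetical sketch; the service definition and image tag must match the actual compose file):

```yaml
services:
  redis:
    image: redis:6.2-alpine   # illustrative tag
    command: redis-server --maxmemory 2gb --maxmemory-policy volatile-lru
```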

Risks and trade-offs

  • Evictions under memory pressure: If Redis reaches the cap, keys will be evicted per the chosen policy. Applications relying on retained cache entries should tolerate misses.

Legal Boilerplate

Look, I get it. The entity doing business as "Sentry" was incorporated in the State of Delaware in 2015 as Functional Software, Inc. and is gonna need some rights from me in order to utilize my contributions in this here PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Sentry can use, modify, copy, and redistribute my contributions, under Sentry's choice of terms.

kixorz added 2 commits August 26, 2025 18:15
…etting max memory using container's available memory. Removed unused `redis.conf`.
@aminvakil
Collaborator

Applications relying on retained cache entries should tolerate misses.

Are we sure this is true for self-hosted?

Also, MemTotal / 4 in our minimum-requirements setup (16GB RAM + 16GB swap) would be 4GB, so we have to be sure redis does not use more than that.

I checked two self-hosted instances and one of them is using 20MB and the other is using 40MB :)

@aminvakil
Collaborator

@kixorz Which version of self-hosted are you on? I remember we had a memory leak problem with redis about two or three years ago, but it was fixed after a couple of releases; unfortunately I cannot remember the exact version.

And I was wondering why your redis instance got OOMed?

@kixorz
Author

kixorz commented Aug 26, 2025

Hey, thanks for the question. We ran out of disk space and Sentry stopped working. When we rebooted, redis allocated all available memory on the system and the machine hung. We had to double the RAM to make it work again. Once it was working, we scaled the RAM back down, and redis exhausted the memory again when it tried to start back up. This is because its on-disk database was already as big as the highest RAM value from the previous start.

@aminvakil
Collaborator

@aminvakil left a comment

I do not think redis usage correlates with the total memory of the server, so there is no need for it to be 1/4 of total memory.

On another note, some users may have changed their redis.conf locally, and this change would break their setups.

It'd be nice if we had an estimate of how redis uses memory and set a maxmemory suited to that usage. For example, if self-hosted redis uses at most 2GB of memory, we can just set maxmemory to 4GB.

@aminvakil
Collaborator

Also copying this from original PR which created redis.conf:

The default value of the maxmemory setting in Redis depends on the system architecture:

  • 64-bit systems: By default, there is no limit on memory usage. This allows Redis to utilize as much RAM as the operating system permits until it runs out of available memory.
  • 32-bit systems: The implicit memory limit is typically 3GB, a consequence of the limitations inherent in 32-bit addressing.

I believe we can set the default value to unlimited, allowing users to adjust this setting as needed based on their specific requirements.

Originally posted by @Hassanzadeh-sd in #3427 (comment)

@kixorz
Author

kixorz commented Aug 28, 2025

The problem is that with maxmemory 0 there is no cap on redis memory use. There are situations where redis will use all available memory - for example after the system runs out of disk space. This hangs the system.

If the solution is to modify the conf file, feel free to close this.
I think having the two params in the compose file is more elegant.

@aminvakil
Collaborator

The problem is that with maxmemory 0 there is no cap on redis memory use. There are situations where redis will use all available memory - for example after the system runs out of disk space. This hangs the system.

It's fine to set an appropriate limit on redis maxmemory if we know how its memory usage behaves over time.

If the solution is to modify the conf file, feel free to close this. I think having the two params in the compose file is more elegant.

We can change this PR to set the limit in redis.conf if that's OK with you. Let's see what maintainers think about this before you put work onto it though.

@BYK
Member

BYK commented Sep 2, 2025

👎🏻 on this. We should just let people change the redis config.

@kixorz if you feel strongly about the env variable you can also use a docker-compose.override.yml file to apply your changes and use an env variable in your own installation.
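A per-installation override along these lines might look like this (a hypothetical sketch; the redis service name, env variable, and default must match your actual deployment):

```yaml
# docker-compose.override.yml (hypothetical sketch)
services:
  redis:
    command: redis-server --maxmemory "${REDIS_MAXMEMORY:-2gb}" --maxmemory-policy volatile-lru
```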

@aminvakil
Collaborator

👎🏻 on this. We should just let people change the redis config.

@kixorz if you feel strongly about the env variable you can also use a docker-compose.override.yml file to apply your changes and use an env variable in your own installation.

@BYK How do you feel about a default redis maxmemory instead of 0?

@BYK
Member

BYK commented Sep 2, 2025

@BYK How do you feel about a default redis maxmemory instead of 0?

Don't have enough ops expertise to make an intelligent comment about this 😅
