
Significance of absolute accurate used_memory? #467

Open
lipzhu opened this issue May 8, 2024 · 7 comments

Comments

@lipzhu
Contributor

lipzhu commented May 8, 2024

The problem/use-case that the feature addresses

Currently, every call to the zmalloc/zfree family of functions invokes zmalloc_size plus atomic operations to update used_memory. These are all costly operations; is it worth putting such expensive work in such frequently called, low-level APIs?

Several pull requests have been submitted to optimize this:

#308
#453

Alternatives you've considered

Maybe we can consider relaxing the absolute accuracy of used_memory as a trade-off for performance?

@zuiderkwast
Contributor

I think it's safe to remove this exact counting. It's already not exact anyway (see discussion in the PR).

In the contributor summit, someone mentioned that we should remove the memory counter in zmalloc and instead rely on the metrics from jemalloc's mallctl(). It does the same accounting as we do anyway, except that it also includes allocations made without zmalloc, which IMHO is a better metric to use when checking the maxmemory limit.

  • We can get allocated memory using mallctl("stats.allocated").
  • If that's too expensive, it's possible to access a pointer (uint64_t *) to jemalloc's own thread local counters using mallctl("thread.allocatedp") and mallctl("thread.deallocatedp").

We can optimize the balance between exactness and performance using some heuristics.

The worst-case scenario is that two threads allocate a huge amount of memory almost simultaneously while we're close to the maxmemory limit. To avoid that, we could fetch the metric more frequently when we're closer to maxmemory, say when we're over 90% of it, and less often otherwise.

Another possible heuristic is to count large allocations (say over 10MB) immediately, incrementing the global counter directly from zmalloc in that case, and otherwise rely on the value we fetch from jemalloc less often, e.g. in cron.
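A minimal sketch of that large-allocation heuristic, assuming a 10MB cutoff: sizes above the threshold hit the shared atomic counter immediately, while smaller ones accumulate in a thread-local delta that a periodic job (e.g. cron) would flush. The threshold and all names here are illustrative, not the actual implementation.

```c
/* Sketch: large allocations update the global counter right away;
 * small ones are counted in a thread-local delta and reconciled later.
 * Threshold and identifiers are assumptions for illustration. */
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define LARGE_ALLOC_THRESHOLD (10 * 1024 * 1024) /* 10MB, as suggested above */

static _Atomic int64_t used_memory = 0;          /* global, read by maxmemory checks */
static _Thread_local int64_t small_alloc_delta;  /* flushed lazily, e.g. from cron */

static inline void account_alloc(size_t size) {
    if (size >= LARGE_ALLOC_THRESHOLD) {
        /* Large allocations become globally visible immediately. */
        atomic_fetch_add_explicit(&used_memory, (int64_t)size, memory_order_relaxed);
    } else {
        /* Small allocations are counted locally, avoiding the atomic. */
        small_alloc_delta += (int64_t)size;
    }
}
```

This keeps the common (small-allocation) path free of atomic contention while bounding how far the global counter can lag on any single large allocation.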

@PingXie
Member

PingXie commented Jun 13, 2024

@zuiderkwast, did you get a chance to look at @lipzhu's proposal at #308 (comment)? The idea is to cache the small delta locally and, only when the accumulated changes exceed some threshold, commit them to the global variable atomically. On the reporting path, we could sum up the global counter and the deltas from all threads to get a close-enough reading. Note that the local delta would need to be a signed number.
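The reporting path described here could look roughly like the following sketch, assuming each thread registers a pointer to its signed local delta at startup. The fixed thread cap and all names are hypothetical; the cross-thread reads are intentionally racy, which is acceptable since the result only needs to be close enough.

```c
/* Sketch of the reporting path: sum the global counter and every
 * thread's outstanding signed delta. MAX_THREADS and all names are
 * assumptions for illustration. */
#include <stdatomic.h>
#include <stdint.h>

#define MAX_THREADS 128

static _Atomic int64_t used_memory = 0;
static _Atomic(int64_t *) thread_deltas[MAX_THREADS]; /* registered per thread at startup */

static int64_t used_memory_report(void) {
    int64_t total = atomic_load_explicit(&used_memory, memory_order_relaxed);
    for (int i = 0; i < MAX_THREADS; i++) {
        int64_t *delta = atomic_load_explicit(&thread_deltas[i], memory_order_relaxed);
        /* Deltas are signed: a thread that mostly freed memory has a
         * negative delta. Reading them without synchronization gives an
         * approximate but close-enough total. */
        if (delta) total += *delta;
    }
    return total;
}
```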

I have a few questions about using mallctl("stats.allocated"), or a variant of it, as you described above:

  1. What is the overhead?
  2. What about other allocators?

@zuiderkwast
Contributor

@PingXie Yes, I looked at the PR, but it isn't currently implemented as described in the #308 (comment) comment, right? I think that comment looks good, and it is simpler than the current array of counters per thread.

@lipzhu Do I understand correctly? Is the suggestion to compute the diff and report back to the main atomic used_memory only when it has changed by more than 100KB or so, like the code below? If yes, then I think there's no point in using jemalloc's stats.

#include <stdatomic.h>
#include <stdint.h>

/* Global counter, touched only when a thread's local delta grows large. */
static _Atomic size_t used_memory = 0;

/* Signed per-thread delta: allocations add, frees subtract. */
static _Thread_local int64_t thread_used_memory_delta = 0;

#define THREAD_MEM_MAX_DELTA (100 * 1024)

static inline void update_zmalloc_stat_alloc(size_t size) {
    thread_used_memory_delta += size;
    if (thread_used_memory_delta >= THREAD_MEM_MAX_DELTA) {
        atomic_fetch_add_explicit(&used_memory, thread_used_memory_delta, memory_order_relaxed);
        thread_used_memory_delta = 0;
    }
}

static inline void update_zmalloc_stat_free(size_t size) {
    thread_used_memory_delta -= size;
    if (thread_used_memory_delta <= -THREAD_MEM_MAX_DELTA) {
        atomic_fetch_sub_explicit(&used_memory, -thread_used_memory_delta, memory_order_relaxed);
        thread_used_memory_delta = 0;
    }
}

So maybe there's no point in using jemalloc's stats, but I'm answering the questions anyway, Ping:

I have a few questions about using mallctl("stats.allocated") or a variant of it as you described above

  1. What is the overhead?

I think mallctl("stats.allocated") has the overhead of at least one function call, which may be too much here.

But if we use mallctl("thread.allocatedp") and mallctl("thread.deallocatedp") instead, they are called only during initialization, to get pointers to jemalloc's own counters. These are just thread-local uint64_t variables that we can access directly: one for allocated and one for deallocated memory.

  2. What about other allocators?

We would need a fallback to count this ourselves, but it can be conditional (#ifdef), so there is no cost when jemalloc is used.

@PingXie
Member

PingXie commented Jun 13, 2024

@PingXie Yes, I looked at the PR, but it isn't currently implemented as described in the #308 (comment) comment, right? I think that comment looks good, and it is simpler than the current array of counters per thread.

No, I don't think @lipzhu has implemented it yet. But yes, your implementation is what I'd like to see, and 100 KB (or 128 KB?) seems good to me too.

@PingXie
Member

PingXie commented Jun 14, 2024

Actually I wonder if we should go even higher, like 1MB.

@lipzhu
Contributor Author

lipzhu commented Jun 14, 2024

@zuiderkwast @PingXie Thanks for your comments.

Let me summarize the decision based on your comments in this issue and the PR. Please correct me if I misunderstood.

  1. If jemalloc is not defined, prefer the proposal in "Introduce Thread-local storage variable to reduce atomic contention when updating used_memory metrics" #308 (comment) to track used_memory, with a 1MB threshold.
  2. If jemalloc is defined, just return used_memory from je_mallctl("stats.allocated"), so we can save the cost of update_zmalloc_stat_alloc/free and je_malloc_usable_size on every zmalloc/zfree call.

@PingXie
Member

PingXie commented Jun 14, 2024

I think we are recommending just option 1.
