
Java memory analyzer

Origin

When optimizing a MySQL pagination interface, I wanted to use a Redis cache to store some results and speed it up. To judge how much memory that would cost, I had to do some memory estimation.

Solution

This solution comes from this Stack Overflow question.

Steps

Write the following source:

package memory;

import java.lang.instrument.Instrumentation;

public class ObjectSizeFetcher {
    private static Instrumentation instrumentation;

    public static void premain(String args, Instrumentation inst) {
        instrumentation = inst;
    }

    public static long getObjectSize(Object o) {
        return instrumentation.getObjectSize(o);
    }
}
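
One caveat: if the program is started without the -javaagent flag, premain never runs and instrumentation stays null, so getObjectSize fails with a NullPointerException. A small guard (my addition, not part of the original answer) makes the failure clearer:

public static long getObjectSize(Object o) {
    if (instrumentation == null) {
        throw new IllegalStateException(
                "Agent not loaded; run with -javaagent:ObjectSizeFetcherAgent.jar");
    }
    return instrumentation.getObjectSize(o);
}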

Add the following line to your MANIFEST.MF:

Premain-Class: memory.ObjectSizeFetcher
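
If you are writing MANIFEST.MF from scratch, the whole file can be as small as the two lines below; make sure it ends with a newline, or the jar tool may silently drop the last attribute:

Manifest-Version: 1.0
Premain-Class: memory.ObjectSizeFetcher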

Use getObjectSize:

package memory;

public class C {
    private int x;
    private int y;

    public static void main(String[] args) {
        System.out.println(
                ObjectSizeFetcher.getObjectSize(new C()));
    }
}

Build your jar as you have done before, or use:

jar cvfm ObjectSizeFetcherAgent.jar MANIFEST.MF input_file

Invoke with:

java -javaagent:ObjectSizeFetcherAgent.jar memory.C
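
On a typical 64-bit HotSpot JVM this prints 24 for C: a 12-byte object header plus two 4-byte int fields, rounded up to the 8-byte alignment boundary. The exact number is implementation-specific, since getObjectSize only promises an approximation.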

Runnable source can be found here.

Other solution

Limitation

As comments on the Stack Overflow question above pointed out, this solution has a limit: getObjectSize reports only the shallow size of an object, so a reference field contributes just the size of the reference itself, not of the object it points to in the heap.
For example, the program prints 16 for every one of the following classes (on a typical 64-bit HotSpot JVM: a 12-byte object header plus one 4-byte compressed reference), no matter what the field refers to:

private static class JustString {
    private String a = "test";
}

private static class ByteArray {
    private byte[] a = new byte[128];
}

private static class JustObject {
    private Object o = new Object();
}

private static class JustList {
    private List<Object> l = new ArrayList<>();
}

So we move on to a more capable solution.

Solution from netty

While skimming the source of Netty, I found a thread pool executor named MemoryAwareThreadPoolExecutor. Curious about this class, I looked up its source and read the following documentation:

When a task (i.e. Runnable) is submitted, MemoryAwareThreadPoolExecutor calls ObjectSizeEstimator.estimateSize(Object) to get the estimated size of the task in bytes to calculate the amount of memory occupied by the unprocessed tasks.
If the total size of the unprocessed tasks exceeds either per-Channel or per-Executor threshold, any further execute(Runnable) call will block until the tasks in the queue are processed so that the total size goes under the threshold.
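
As a minimal usage sketch (assuming Netty 3.x, where the class lives in org.jboss.netty.handler.execution; the pool size and thresholds are illustrative):

package memory;

import org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor;

public class BoundedSubmitExample {
    public static void main(String[] args) {
        // Block further execute() calls once unprocessed tasks are
        // estimated at over 1 MiB per channel or 16 MiB in total.
        MemoryAwareThreadPoolExecutor executor =
                new MemoryAwareThreadPoolExecutor(16, 1 << 20, 16 << 20);

        executor.execute(new Runnable() {
            public void run() {
                System.out.println("task ran");
            }
        });
        executor.shutdown();
    }
}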

This executor blocks job submission once the memory limit is exceeded, and the memory analyzer is a crucial part of that mechanism. We can find an implementation of ObjectSizeEstimator in the source,
DefaultObjectSizeEstimator, and we will analyze how it is implemented.
First, it stores the sizes of the primitive types:

public DefaultObjectSizeEstimator() {
    class2size.put(boolean.class, 4); // Probably an integer.
    class2size.put(byte.class, 1);
    class2size.put(char.class, 2);
    class2size.put(int.class, 4);
    class2size.put(short.class, 2);
    class2size.put(long.class, 8);
    class2size.put(float.class, 4);
    class2size.put(double.class, 8);
    class2size.put(void.class, 0);
}

Then it analyzes the class recursively, adding up the estimated size of the declared type of every non-static field:

visitedClasses.add(clazz);

int answer = 8; // Basic overhead.
for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
    Field[] fields = c.getDeclaredFields();
    for (Field f : fields) {
        if ((f.getModifiers() & Modifier.STATIC) != 0) {
            // Ignore static fields.
            continue;
        }

        answer += estimateSize(f.getType(), visitedClasses);
    }
}

visitedClasses.remove(clazz);

And then aligns the total to an 8-byte boundary:

// Some alignment.
answer = align(answer);
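
The align helper is not shown in the excerpt; a straightforward reconstruction (mine, not necessarily Netty's exact code) rounds up to the next multiple of 8:

private static int align(int size) {
    int r = size % 8;
    if (r != 0) {
        size += 8 - r;
    }
    return size;
}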

Finally, there are special-case handlers for classes whose size depends on their contents:

...
} else if (o instanceof byte[]) {
    answer += ((byte[]) o).length;
} else if (o instanceof ByteBuffer) {
    answer += ((ByteBuffer) o).remaining();
} else if (o instanceof CharSequence) {
    answer += ((CharSequence) o).length() << 1;
} else if (o instanceof Iterable<?>) {
    for (Object m : (Iterable<?>) o) {
        answer += estimateSize(m);
    }
}
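
As a quick sanity check, the estimator can also be used directly; a sketch assuming Netty 3.x, where both types live in org.jboss.netty.util:

import org.jboss.netty.util.DefaultObjectSizeEstimator;
import org.jboss.netty.util.ObjectSizeEstimator;

ObjectSizeEstimator estimator = new DefaultObjectSizeEstimator();
// byte[] hits the special case above: skeleton size plus array length.
System.out.println(estimator.estimateSize(new byte[128]));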

Note that this class is designed for Netty's own use, which lets it omit many special cases (for example, it has no handler for an int array), so it is not suitable for general-purpose measurement.

But most of the code can be reused to build your own customized memory analyzer.
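
For example, a stripped-down, self-contained estimator along the same lines might look like this (a hypothetical sketch, not Netty's code):

package memory;

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

public final class SimpleSizeEstimator {

    public static int estimateSize(Class<?> clazz) {
        return estimateSize(clazz, new HashSet<Class<?>>());
    }

    private static int estimateSize(Class<?> clazz, Set<Class<?>> visited) {
        // Primitive field sizes, mirroring class2size above.
        if (clazz == byte.class) {
            return 1;
        }
        if (clazz == char.class || clazz == short.class) {
            return 2;
        }
        if (clazz == boolean.class || clazz == int.class || clazz == float.class) {
            return 4;
        }
        if (clazz == long.class || clazz == double.class) {
            return 8;
        }
        if (!visited.add(clazz)) {
            return 0; // Break cycles between mutually referencing types.
        }

        int answer = 8; // Basic overhead for the object header.
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if ((f.getModifiers() & Modifier.STATIC) != 0) {
                    continue; // Static fields take no per-instance space.
                }
                answer += estimateSize(f.getType(), visited);
            }
        }
        visited.remove(clazz);

        // Round up to the JVM's 8-byte object alignment.
        return (answer + 7) & ~7;
    }

    public static void main(String[] args) {
        // One long and one int: 8 (overhead) + 8 + 4 = 20, aligned to 24.
        class Sample { long a; int b; }
        System.out.println(estimateSize(Sample.class));
    }
}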

Online source code of DefaultObjectSizeEstimator

