Syncer Performance Tuning

Recently, I re-read some chapters of Programming Pearls about performance tuning. For notes on the basic principles of performance optimization, you may want to refer to this post. In those chapters, the author lists several optimization levels to check when performance becomes a problem:

  • Design level (system architecture): run in parallel, do work asynchronously, etc.;
  • Algorithm level: space/time tradeoffs; better algorithms & ADTs;
  • Code tuning: loop unrolling; lock optimization;
  • System software
    • JVM level: garbage collector choice; command-line argument tuning;
    • OS level: network config; IO tuning;
  • Hardware

Now, we will apply the above checklist of optimization levels to a specific project – syncer (a tool to sync & manipulate data from MySQL/MongoDB to Elasticsearch/MySQL/HTTP endpoints) – to put the knowledge into practice and get a better idea of performance tuning.

Prepare Data & Env

The architecture of syncer is relatively reasonable and fixed (see this blog post if you want an architecture overview: producer & consumer + pipe & filter), so we will skip this part. There are no complex algorithms in syncer either, so we will not dive into that. Instead, we will start with code tuning.

In order to do code tuning, we used some tools to generate random test data and to profile & tune the code:

  • Go script: generate 10^8 lines of CSV data to use;
  • JProfiler: a powerful tool which can profile CPU/memory, locks/JDBC, etc.; an example usage can be found here;
  • Docker: to run clean MySQL/Elasticsearch containers for testing;

After all the preparation tasks (installing JProfiler and starting the application via its IDEA plugin, generating data, pulling images and configuring MySQL binlog & ES), I start syncer to listen for MySQL data changes and run the import script to insert into & delete from MySQL.

In the main panel of JProfiler, we can see an overview of the application: memory, CPU, threads, GC activity and classes. The first thing I noticed is that there are many waiting/blocked threads.

Thread

Thread Overview

If we open the Threads -> Thread Monitor panel, we can see the details and find some unknown threads. After some guessing and searching, we find that those threads are used for MongoDB communication. With the help of the thread creation stack trace, we found that those threads were accidentally auto-configured by Spring Boot (so we exclude that auto-configuration and save those resources).

Besides this, we also find some threads with bad names – like pool-3-thread-1 – and adding a customized thread factory fixes the problem (see the sketch below).
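A minimal sketch of such a thread factory (the class name and thread-name prefix here are illustrative, not the actual syncer code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: name pool threads "syncer-filter-1", "syncer-filter-2", ...
// instead of the default "pool-3-thread-1".
public class NamedThreadFactory implements ThreadFactory {

  private final String prefix;
  private final AtomicInteger counter = new AtomicInteger();

  public NamedThreadFactory(String prefix) {
    this.prefix = prefix;
  }

  @Override
  public Thread newThread(Runnable runnable) {
    return new Thread(runnable, prefix + "-" + counter.incrementAndGet());
  }

  public static void main(String[] args) {
    ExecutorService pool =
        Executors.newFixedThreadPool(2, new NamedThreadFactory("syncer-filter"));
    pool.submit(() -> System.out.println(Thread.currentThread().getName()));
    pool.shutdown();
  }
}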


GC & Memory

Some time later, we noticed that memory usage was very high and the GC activity rate was over 90%. Soon, the application crashed with:

Exception in thread "blc-192.168.1.204:3307" java.lang.OutOfMemoryError: Java heap space

Wondering whether there is some kind of memory leak, we enable GC logging with the following flags and see many Full GCs happening:

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps 

8.484: [Full GC (Metadata GC Threshold) [PSYoungGen: 9524K->0K(190464K)] [ParOldGen: 9644K->14355K(138752K)] 19168K->14355K(329216K), [Metaspace: 34387K->34387K(1079296K)], 0.0434990 secs] [Times: user=0.10 sys=0.00, real=0.05 secs] 

PS stands for the Parallel Scavenge young-generation collector, and ParOldGen means the Parallel Old generation (mark-sweep-compact) collector. In order to view the details of heap usage and allocation and see where the bottleneck is, we need to use JProfiler's Heap Walker tab (note that the Live memory tab only shows objects & allocation call sites, not the reference relationships).

After taking an HPROF heap snapshot, we get a memory usage table. Using JProfiler's reference analysis, we find that many SyncData items are stored in the queue between the input and filter modules. This means there is no real memory leak, but there is a performance problem, which leads to the memory problem.

So, we go through the following checklist:

  • Check output speed: does Elasticsearch have too many indices & shards?
  • No real memory leak, but can we reduce the memory footprint?
  • Performance problem:
    • Why slow: the data input rate is faster than the handling speed
    • Solution: increase the handling speed

Opt Memory

  • Reduce map size: we found some maps which can be initialized lazily, only when necessary;

  • Exclude unused threads: as described above.

  • Use shared variables when possible: we reuse a stateless object rather than instantiating it every time.

    • Reuse & release StandardEvaluationContext: we use a ThreadLocal to share the context when possible to save allocations; we release the context as soon as evaluation finishes so memory can be reclaimed earlier (rather than waiting until the event is sent to the output target); see the sketch after this list.
  • Optimize data id length: we found that Strings occupy a lot of memory, and by following the references we found the id field. In order to save memory, we change the encoding of the id and cut about 1/3 of the id length.
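A minimal sketch of the reuse-and-release idea mentioned above (the helper class and method names are illustrative assumptions, not the actual syncer API):

import org.springframework.expression.Expression;
import org.springframework.expression.spel.support.StandardEvaluationContext;

// Sketch: share one evaluation context per thread and drop the reference
// to the event as soon as evaluation finishes, so the event data can be
// collected before the result reaches the output target.
public class ContextHolder {

  private static final ThreadLocal<StandardEvaluationContext> CONTEXT =
      ThreadLocal.withInitial(StandardEvaluationContext::new);

  public static Object evaluate(Expression expression, Object event) {
    StandardEvaluationContext ctx = CONTEXT.get();
    ctx.setRootObject(event);           // bind the current event
    try {
      return expression.getValue(ctx);  // evaluate against the shared context
    } finally {
      ctx.setRootObject(null);          // release the event reference early
    }
  }
}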

Through many small optimizations, we finally reduce memory usage by about 50% under stress testing.

CPU & Speed

In order to improve the speed of event handling, we need to find the hot spots first. We start CPU recording, then we can see the hot spot methods in CPU views -> Hot Spots:

Hotspot

By sorting the methods by Average Time, we can easily find the targets; then we click through to see where those methods are called and make some optimizations:

  • Arrays.toString(..): test the log level before logging a large entity (by the way, a lambda might be better here, but slf4j does not support it for the time being):
if (logger.isDebugEnabled()) {  
  logger.debug("Receive binlog event: {}", Arrays.toString(someArray)); 
}
  • Event.toString(): pass the event object to the logger instead of calling toString() yourself, so the call is skipped when the log level isn't met;
  • ArrayList.ensureCapacityInternal(..)
    • Initialize the ArrayList with a size rather than growing and copying;
    • Or replace ArrayList with LinkedList;
  • SpelExpressionParser.parseExpression(..): don't parse the expression every time; reuse the parse result (a sketch follows this list);
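For the last item, a minimal sketch of caching parse results (the ExpressionCache helper is an illustrative name, not part of syncer or Spring):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;

// Sketch: parse each SpEL string once and reuse the Expression for every
// later event instead of calling parseExpression(..) per event.
public class ExpressionCache {

  private static final SpelExpressionParser PARSER = new SpelExpressionParser();
  private static final Map<String, Expression> CACHE = new ConcurrentHashMap<>();

  public static Expression get(String expressionString) {
    return CACHE.computeIfAbsent(expressionString, PARSER::parseExpression);
  }
}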

Conclusion

Through this series of performance tunings, we finally decrease memory usage by about 50% and improve the output rate by 75%. Hope it is helpful to you too.

Ref

Written with StackEdit.
