
Elasticsearch MySQL Sync Challenge (4): Quality Attributes


Tony was called into the leader’s office. With the basic functionality in place, it was time to review the quality attributes of the project.

Extensibility

“The syncer you built works great, and it has good extensibility thanks to the relatively loose coupling between its modules (input, filter, output). Now we need one more output choice: writing to a MySQL server. How long do you think it will take to finish this feature?”

“Hmm, at least three days, for both coding and testing.”

“Fine, go for it.”


The process of adding one more output choice is relatively simple:

  • Add a MySQL output config to the config package, which maps config items in YAML to classes;
  • Add a MySQL output channel, which is the abstraction of the remote destination;
  • Add a SqlMapper, which converts a SyncData into an SQL statement when it reaches the MySQL output channel;

The first item was implemented very quickly, but the second took longer. In the output channel, Tony had to introduce connection pool management from Tomcat. Besides adding batch buffering, Tony also needed to add retry logic to handle the exceptional cases in which the channel fails to send data to the remote MySQL.
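A rough sketch of what such an output channel could look like, assuming the Tomcat JDBC connection pool plus a bounded retry loop (the class and method names below are illustrative, not the actual syncer code):

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.List;

    // Illustrative sketch: pooled connections, batched writes, bounded retry.
    public class MysqlOutputChannel {

      private static final int MAX_RETRY = 3;
      private final DataSource dataSource;

      public MysqlOutputChannel(String url, String user, String password) {
        PoolProperties p = new PoolProperties();
        p.setUrl(url);
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername(user);
        p.setPassword(password);
        p.setMaxActive(10);
        dataSource = new DataSource(p);
      }

      /** Sends a batch of rendered SQL statements, retrying a few times on failure. */
      public boolean output(List<String> sqlBatch) {
        for (int attempt = 1; attempt <= MAX_RETRY; attempt++) {
          try (Connection conn = dataSource.getConnection();
               Statement stmt = conn.createStatement()) {
            for (String sql : sqlBatch) {
              stmt.addBatch(sql);
            }
            stmt.executeBatch();
            return true;                // sent successfully
          } catch (SQLException e) {
            // transient failure (connection reset, deadlock, ...): fall through and retry
          }
        }
        return false;                   // caller decides what to do with the failed batch
      }
    }

Pooling lets connections be reused across batches, and the boolean return leaves the failure policy up to the caller (discard, or the Failure Log discussed below).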

The third task is a fairly simple SQL generation task. Tony adopted a simple template-string-and-replace approach, which can only handle plain insert/update/delete statements and one level of nested query. Because the mapping process is managed by a single class and already covers the business requirements, Tony marked a TODO and moved on to other work.
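The template-string-and-replace idea could be sketched like this (simplified and, as the TODO hints, without proper escaping or parameter binding):

    import java.util.Map;
    import java.util.StringJoiner;

    // Simplified sketch of template-based SQL generation for an insert.
    // Real code would need escaping / parameter binding -- that is the TODO Tony left.
    public class SqlMapper {

      /** Renders e.g. "insert into `user` (id, name) values ('1', 'tony')". */
      public String insert(String table, Map<String, Object> fields) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner vals = new StringJoiner(", ");
        fields.forEach((col, val) -> {
          cols.add(col);
          vals.add("'" + val + "'");    // naive quoting, not injection-safe
        });
        return String.format("insert into `%s` (%s) values (%s)", table, cols, vals);
      }
    }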

After some testing, the new output channel was running in the production environment.

Consistency

Aims

  • Updated asynchronously - The user’s normal DB request and search request should be delayed as little as possible.
  • Eventually consistent - While it can lag behind slightly, serving stale results indefinitely isn’t an option.
  • Easy to rebuild - Updates can be lost before reaching Elasticsearch, and Elasticsearch itself is known to lose data under network partitions.

“The extensibility of the syncer is not bad, but its data consistency guarantee is not so good in the current implementation,” Tony thought. “Looking back at the aims of the original discussion, the second aim is not met now. If the syncer goes down, we may lose some data, because the syncer currently advances the write-ahead log as soon as it receives a binlog event. That means that if some data is still being handled by the syncer, or fails to be sent to its destination, the log has already moved to the new position; if the syncer crashes and restarts at that moment, the data that was inside the syncer is lost.

“And the third aim is not well handled either. Because the binlog rolls over, we cannot fully rebuild from the binlog; we have to rebuild through SQL queries, which puts an even higher data consistency requirement on the syncer. When we need to rebuild the data, we run Logstash’s jdbc plugin to query all the data from the table and insert it into Elasticsearch. This is very bad, because I have to maintain two ways to sync.”

Ack Log

"The first improvement that I need to do is to change the write ahead log to ack log, which move binlog position one step further only if we send the data to destination successfully rather than when we receive the data from master.

"In the past, we only to remember the latest binlog position only, now we need to do in three phases:

  • First, we record what we have just received in an in-memory data structure;
  • After we send an item to its destination successfully, we remove the corresponding entry from that data structure;
  • Periodically, we flush the smallest remaining binlog position to disk;

“With this solution, we always have the smallest unacknowledged position stored on disk, so when we restart we can be sure that we will not miss any data.”

“But in this case, we may send the same piece of data more than once; in other words, we can guarantee at-least-once semantics, but not exactly-once.”
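One way this bookkeeping might be sketched (illustrative only; a real syncer tracks binlog file plus offset rather than a single number) is a sorted concurrent set of in-flight positions whose smallest element is flushed periodically:

    import java.util.concurrent.ConcurrentSkipListSet;

    // Illustrative ack-log bookkeeping: remember in-flight binlog positions,
    // drop them once acknowledged, periodically persist the smallest one.
    public class AckLog {

      private final ConcurrentSkipListSet<Long> inFlight = new ConcurrentSkipListSet<>();
      private volatile long lastFlushed = -1;

      /** Phase 1: record the position when a binlog event is received. */
      public void received(long position) {
        inFlight.add(position);
      }

      /** Phase 2: forget the position once the destination has acknowledged it. */
      public void acked(long position) {
        inFlight.remove(position);
      }

      /** Phase 3: called periodically; persist the smallest unacknowledged position. */
      public void flush() {
        if (inFlight.isEmpty()) {
          return;                        // nothing pending, nothing new to record
        }
        long smallest = inFlight.first();
        if (smallest != lastFlushed) {
          persist(smallest);             // e.g. write to a small file on disk
          lastFlushed = smallest;
        }
      }

      private void persist(long position) {
        // write the position durably; on restart the syncer resumes from here,
        // so at worst it re-reads events that were already sent (at-least-once)
      }
    }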

Failure Log

“The next part to solve is how to handle failed SyncData. For the time being, if we fail to send some data to the destination, we retry a few times and discard it once the maximum retry count is exceeded. With the help of the Ack Log we will not lose that data item, but we will not make progress either: the next time we restart, we will start again at the failed item and repeat sending items that were already sent.”

"So we need to remove it from Ack Log. But in order to make sure it will not lose, we need to write it to another log, what about Failure Log. In this way, we write failed data into Failure Log and continue to proceed to new item.

"What if is the network problem, i.e. all the following item is failed? Maybe we should monitor the Failure Log's size or failed item count/speed, if it exceeds some limit, we should just abort and let people engage.

“Then, on restart, we can perform automatic recovery from the Failure Log. So the Failure Log should be in a form that is easy to write and to parse. Java serialization suits the need, but protocol buffers seem to offer better performance and interoperability, so we can use protocol buffers.” Tony finished the design process and started to implement it.
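A minimal append-only Failure Log could be sketched like this (illustrative; it stores length-prefixed byte records, so the payload can be whatever serialization wins out, e.g. protocol buffer bytes):

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Illustrative append-only failure log: length-prefixed byte records,
    // flushed per write so a crash loses at most the record being written.
    public class FailureLog implements AutoCloseable {

      private final DataOutputStream out;

      public FailureLog(String path) throws IOException {
        this.out = new DataOutputStream(new FileOutputStream(path, true)); // append mode
      }

      /** Appends one failed item; the payload is the serialized SyncData. */
      public synchronized void append(byte[] payload) throws IOException {
        out.writeInt(payload.length);   // length prefix makes parsing on recovery trivial
        out.write(payload);
        out.flush();
      }

      @Override
      public void close() throws IOException {
        out.close();
      }
    }

Recovery on restart is then the mirror image: read a length, read that many bytes, deserialize, and re-send.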


PS: At the time of writing, the Ack Log is already implemented, but the Failure Log is still under development.

Written with StackEdit.
