Tony was called into the leader’s office. With the basic functionality in place, Tony had started to review the quality attributes of the project.
Extensibility
“The syncer you completed works great, and it has good extensibility thanks to the relatively loose coupling between its modules (input, filter, output). Now we need one more output choice: writing to a MySQL server. How long do you think it will take to finish this feature?”
“Hmm, at least three days, for both coding and testing.”
“Fine, go for it.”
The process to add one more output choice is relatively simple:
- Add a `MySQL` output config in the config package, which maps the config items in `yaml` to a class (a minimal sketch follows this list);
- Add a `MySQL` output channel, which is the abstraction of the remote destination;
- Add a `SqlMapper`, which converts the `SyncData` into a SQL statement when the `SyncData` reaches the `MySQL` output channel.
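The first item is mostly plumbing. A minimal sketch, assuming SnakeYAML and a hypothetical `MysqlConnection` config class; the field names are illustrative, not the project’s actual schema:

```java
import org.yaml.snakeyaml.Yaml;
import java.io.InputStream;

// Hypothetical config class; SnakeYAML binds yaml keys onto its public fields.
public class MysqlConnection {
  public String address;
  public int port;
  public String user;
  public String password;

  public static MysqlConnection load(InputStream yamlStream) {
    // Maps a block like `address: 192.168.1.100` / `port: 3306` onto the fields above.
    return new Yaml().loadAs(yamlStream, MysqlConnection.class);
  }
}
```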
The first item was implemented very quickly, but the second was slow. In the output channel, Tony had to introduce connection pool management from Tomcat. Besides adding batch buffering, Tony also needed to add retry logic to handle the exceptional cases in which the channel fails to send data to the remote MySQL.
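Roughly, the output channel boils down to something like the following sketch, using Tomcat’s JDBC connection pool plus a bounded retry loop; the class name, parameters, and error handling are assumptions, not the syncer’s actual code:

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Hypothetical output channel: pooled connections plus bounded retry on failure.
public class MysqlChannel {
  private final DataSource dataSource;
  private final int maxRetry;

  public MysqlChannel(String jdbcUrl, String user, String password, int maxRetry) {
    PoolProperties p = new PoolProperties();
    p.setUrl(jdbcUrl);
    p.setDriverClassName("com.mysql.jdbc.Driver");
    p.setUsername(user);
    p.setPassword(password);
    p.setMaxActive(10);              // pool size; arbitrary example value
    this.dataSource = new DataSource(p);
    this.maxRetry = maxRetry;
  }

  /** Sends one batch of generated SQL; returns false if all retries failed. */
  public boolean sendBatch(List<String> sqls) {
    for (int attempt = 0; attempt <= maxRetry; attempt++) {
      try (Connection conn = dataSource.getConnection();
           Statement stmt = conn.createStatement()) {
        for (String sql : sqls) {
          stmt.addBatch(sql);
        }
        stmt.executeBatch();
        return true;
      } catch (SQLException e) {
        // e.g. a transient network error to the remote MySQL; retry a limited number of times
      }
    }
    return false;                    // the caller decides what to do with the failed batch
  }
}
```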
The third task is a fairly simple SQL generation task. Tony adopted a simple template-string-and-replace method, which can only handle simple insert/update/delete statements and one level of nested query. Because the mapping process is managed by a single class and already covers the business requirements, Tony marked a TODO and moved on to other work.
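The template-and-replace idea is essentially string substitution. A minimal sketch, assuming a hypothetical mapper that only handles plain inserts and receives the table name and field map extracted from `SyncData` (the real class may look quite different):

```java
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical, simplified template-and-replace mapper: handles plain INSERTs only.
public class SqlMapper {
  private static final String INSERT = "insert into `TABLE` (FIELDS) values (VALUES)";

  public String toInsert(String table, Map<String, Object> fields) {
    StringJoiner names = new StringJoiner(", ");
    StringJoiner values = new StringJoiner(", ");
    for (Map.Entry<String, Object> e : fields.entrySet()) {
      names.add("`" + e.getKey() + "`");
      // Naive quoting; real code must escape values or use prepared statements.
      values.add(e.getValue() instanceof Number ? e.getValue().toString()
                                                : "'" + e.getValue() + "'");
    }
    return INSERT.replace("TABLE", table)
                 .replace("FIELDS", names.toString())
                 .replace("VALUES", values.toString());
  }
}
```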
After some testing, the new output channel was running in the production environment.
Consistency
Aim
- Updated asynchronously - The user’s normal DB requests and search requests should be delayed as little as possible.
- Eventually consistent - While it can lag behind slightly, serving stale results indefinitely isn’t an option.
- Easy to rebuild - Updates can be lost before reaching Elasticsearch, and Elasticsearch itself is known to lose data under network partitions.
 
“The extensibility of the syncer is not bad, but the data consistency promise is not so good in the current implementation,” Tony thought. “Looking back at the aims of the original discussion, the second aim is not kept now. If the syncer goes down, we may lose some data, because the syncer currently updates the write-ahead log as soon as it receives the binlog event. That means that while data is still being handled by the syncer, or fails to be sent to the destinations, the log has already advanced to the new position; if the syncer crashes and restarts at that moment, the data that was inside the syncer is lost.
“And the third aim is not well handled either. Because the binlog rolls over, we can’t fully rebuild from the binlog; we have to rebuild through SQL queries, which puts an even higher data consistency requirement on the syncer. When we need to rebuild the data, we run logstash’s jdbc plugin to query all the data from the table and insert it into Elasticsearch. This is very bad, because I have to maintain two ways to sync.”
Ack Log
"The first improvement that I need to do is to change the write ahead log to ack log, which move binlog position one step further only if we send the data to destination successfully rather than when we receive the data from master.
"In the past, we only to remember the latest binlog position only, now we need to do in three phases:
- First, we remember what we have just received in an in-memory data structure;
- After we send it to the destination successfully, we remove the corresponding item from that data structure;
- Periodically, we flush the smallest position of the `binlog` to disk.
“In this solution, we always have the smallest position stored on disk, and when we restart, we can ensure that we will not miss any data.”
“But in this case, we may send one piece of data more than once; in other words, we can ensure at-least-once semantics, but not exactly-once.”
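A minimal sketch of the three-phase idea: keep in-flight binlog positions in a sorted in-memory structure, remove them on ack, and periodically flush the smallest one to disk. The class, file layout, and flush interval are assumptions, not the syncer’s implementation:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical ack log: a position is acknowledged only after a successful send.
public class AckLog {
  private final ConcurrentSkipListSet<Long> inFlight = new ConcurrentSkipListSet<>();
  private final Path positionFile;
  private volatile long lastAcked;     // fallback restart position when nothing is in flight

  public AckLog(Path positionFile) {
    this.positionFile = positionFile;
    // Phase 3: periodically flush the smallest outstanding position to disk.
    ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();
    flusher.scheduleAtFixedRate(this::flush, 1, 1, TimeUnit.SECONDS);
  }

  /** Phase 1: remember a received binlog position before processing it. */
  public void append(long position) {
    inFlight.add(position);
  }

  /** Phase 2: remove the position once the item has reached the destination. */
  public void ack(long position) {
    inFlight.remove(position);
    lastAcked = Math.max(lastAcked, position);   // best-effort; a smaller value only causes re-sends
  }

  private void flush() {
    // Everything below this position has been sent, so it is safe to restart from here.
    Iterator<Long> it = inFlight.iterator();
    long smallest = it.hasNext() ? it.next() : lastAcked;
    try {
      Files.write(positionFile, Long.toString(smallest).getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
      // Best-effort: a stale position only causes re-sends after a restart, never loss.
    }
  }
}
```

On restart, the syncer would read the flushed position and resume from there, which is exactly why duplicates (at-least-once) are possible but loss is not.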
Failure Log
“The next part to solve is how to handle a failed `SyncData`. For the time being, if we fail to send some data to the destination, we retry a few times and discard it once the max retry count is exceeded. With the help of the Ack Log, we will not lose that data item, but we will not make progress any more: the next time we restart, we will still start at that failed item and repeat sending items that were already sent.”
"So we need to remove it from Ack Log. But in order to make sure it will not lose, we need to write it to another log, what about Failure Log. In this way, we write failed data into Failure Log and continue to proceed to new item.
"What if is the network problem, i.e. all the following item is failed? Maybe we should monitor the Failure Log's size or failed item count/speed, if it exceeds some limit, we should just abort and let people engage.
“Then, when we restart, we can perform automatic recovery from the Failure Log. So the Failure Log should be in some form that is easy to write and parse. Java serialization would suit the need, but protocol buffers seem to have better performance and interoperability, so we can use protocol buffers.” Tony finished the design process and started to implement it.
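A sketch of what the Failure Log write path might look like, assuming a hypothetical protobuf-generated `FailureRecord` message (not the syncer’s actual schema) and protobuf’s delimited format so records can be appended and later parsed back one by one; it also folds in the abort threshold mentioned above:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// `FailureRecord` is a hypothetical protobuf-generated class, e.g. from:
//   message FailureRecord { int64 binlog_position = 1; bytes payload = 2; string error = 3; }
public class FailureLog {
  private final OutputStream out;
  private final int abortThreshold;   // too many failures usually means a systemic problem
  private int failedCount;

  public FailureLog(String path, int abortThreshold) throws IOException {
    this.out = new FileOutputStream(path, true);   // append mode
    this.abortThreshold = abortThreshold;
  }

  /** Called after max retries are exhausted: persist the item and move on. */
  public synchronized void log(FailureRecord record) throws IOException {
    // The delimited format prefixes each message with its length, so the file
    // can be read back later with FailureRecord.parseDelimitedFrom(inputStream).
    record.writeDelimitedTo(out);
    out.flush();
    failedCount++;
    if (failedCount >= abortThreshold) {
      // Likely a network-wide problem rather than one bad item; stop and alert a human.
      throw new IllegalStateException("Too many failed items, aborting for manual recovery");
    }
  }
}
```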
PS: At the time this post was written, the Ack Log was already implemented, but the Failure Log was still under development.