
Elasticsearch MySQL Sync Challenge (5): Redesign


Multiple Syncers?

“You may have already heard the new requirement: listen for data changes on the main DB and send the data to the auth system. What is your idea?” the leader said.

“Em, the simple way is of course to write similar config files like before, something like the following:

input:
  masters:
    - connection:
        address: 192.168.1.1
        port: 3306

filter:
  ...

output:
  mysql:
    - connection:
        address: auth-system

----
input:
  masters:
    - connection:
        address: 192.168.1.1
        port: 3306

filter:
  ...

output:
  elasticsearch:
    connection: search-system

“Then we can start two separate syncers to do the data synchronization for both the search system and the auth system,” Tony finished.

“Yes, that works. But it doesn’t seem so good.”

“Em, yep. The two syncers connect to the same data source and receive the same data. If we need to send to 10 destinations, we will receive the same data 10 times, which is obviously a big waste, both in connection resources and in the data stream. It will also add a lot of load on the MySQL master,” Tony answered.

“Great, that is what I am worried about. So now, your first task is to come up with solutions to this problem.”


Redesign

“The first thing to make clear is that the two sync jobs can’t be merged into one: they share the same data source but have different filter and process steps.

“So in order to use the same connection, we have to merge the multiple data sources into one, and create multiple consumers that retrieve the events they each need. The architecture of syncer should then have two main parts, producer and consumer, which is much like a message queue.
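The producer/consumer split above can be sketched as follows. This is a minimal illustrative sketch, not the actual syncer code; all class and method names (`Producer`, `register`, `dispatch`, `on_event`) are assumptions:

```python
# Sketch: one producer reads the binlog stream once and fans each event
# out to every registered consumer, like a tiny in-process message queue.

class Producer:
    def __init__(self):
        self.consumers = []          # registered consumers

    def register(self, consumer):
        self.consumers.append(consumer)

    def dispatch(self, event):
        # one upstream read, N downstream deliveries
        for consumer in self.consumers:
            consumer.on_event(event)

class RecordingConsumer:
    def __init__(self, name):
        self.name = name
        self.seen = []

    def on_event(self, event):
        self.seen.append(event)

producer = Producer()
search = RecordingConsumer("search")
auth = RecordingConsumer("auth")
producer.register(search)
producer.register(auth)
producer.dispatch({"table": "user", "op": "insert"})
```

With this shape, adding a 10th destination costs one more `register` call instead of a 10th connection to the MySQL master.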

Same Process?

“Should the producer and consumer be in the same process or not?” Tony met the first design choice.

“If they are in the same process, producer and consumer can communicate via memory, which is much more efficient than most IPC mechanisms used between separate processes. But if they are in separate processes, we can implement dynamic registration/de-registration when we need more consumers in production.

“Considering that we have a performance requirement when syncing data to the auth system, but no requirement for dynamic consumers, using the same process is preferred. Besides, the same-process design is a little easier to implement. So the first choice is to make producer and consumer run in the same process.

“But in order to avoid too much change in the future, I had better make the interaction between the producer and consumer go through interfaces: for now they are implemented as plain method calls, and maybe, in the future, an RPC implementation can be added,” Tony concluded.

Interfaces

“Now, what interfaces need to be defined between consumer and producer? In the new design, the consumer first registers itself with the producer; the producer then starts to sync data from the master and sends it to the corresponding consumers.

“So the first interface is a ConsumerRegistry, with which a consumer registers itself along with info describing the data it is interested in.

“Apparently, we also need interfaces to move data around: an OutputSink, used by the producer to output events, and an InputSource, used by the consumer to receive events.” Tony then added more methods and arguments to the interfaces.
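The three interfaces could look roughly like this. This is a hedged sketch using Python ABCs purely for illustration; the method names and signatures are assumptions, not the actual syncer API. An in-memory channel backed by a queue stands in for the “method call” implementation chosen above:

```python
# Sketch of the ConsumerRegistry / OutputSink / InputSource interfaces.
from abc import ABC, abstractmethod
import queue

class ConsumerRegistry(ABC):
    @abstractmethod
    def register(self, consumer_id, interested_tables):
        """A consumer registers itself with the data it is interested in."""

class OutputSink(ABC):
    @abstractmethod
    def output(self, event):
        """Producer side: push one sync event downstream."""

class InputSource(ABC):
    @abstractmethod
    def take(self):
        """Consumer side: block until the next sync event is available."""

class InMemoryChannel(OutputSink, InputSource):
    # Same-process implementation: the "transport" is just a shared queue.
    # An RPC-backed implementation could later replace this class without
    # touching producer or consumer code.
    def __init__(self):
        self._q = queue.Queue()

    def output(self, event):
        self._q.put(event)

    def take(self):
        return self._q.get()

channel = InMemoryChannel()
channel.output({"table": "user", "id": 1})
```

Because both sides depend only on the interfaces, swapping `InMemoryChannel` for an RPC implementation later should not ripple through the rest of the code.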

Synced Consumer

“But because they share the same data source, the bottleneck can be the producer. In other words, all consumers are synced with the producer even though they handle received events at different speeds, in which case the slow consumer’s events will pile up. To avoid this situation, we may need to tune how many threads each consumer can have.
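One common way to keep a slow consumer from piling up events without bound is a bounded per-consumer queue with its own worker thread, which eventually applies back-pressure on the producer. This is only an illustrative sketch under that assumption; queue size and thread count are exactly the knobs mentioned above, and all names here are made up:

```python
# Sketch: each consumer owns a bounded queue drained by a worker thread.
import queue
import threading

class ConsumerWorker:
    def __init__(self, handler, max_pending=1000):
        # bounded queue: when full, offer() blocks the producer (back-pressure)
        self.pending = queue.Queue(maxsize=max_pending)
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def offer(self, event):
        self.pending.put(event)     # blocks when this consumer lags too far

    def _run(self):
        while True:
            event = self.pending.get()
            if event is None:       # shutdown sentinel
                break
            self.handler(event)

results = []
worker = ConsumerWorker(results.append, max_pending=10)
for i in range(5):
    worker.offer(i)
worker.offer(None)                  # signal shutdown
worker.thread.join()
```

The trade-off is between memory (bigger queues tolerate longer slow spells) and how quickly back-pressure reaches the producer.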

“The other problem is that all consumers have to start from the same position. If they want data from different positions, the producer has no choice but to start syncing from the smallest position. This may waste computation on handling duplicate events, but the more important effect is that it forces every sync job to be idempotent, i.e. able to handle more-than-once delivery of sync events.” Tony wrote down these drawbacks as a reminder of what a merged producer may cost.
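The start-position problem can be made concrete with a small sketch. Positions are plain integers here for illustration, and the function names are hypothetical: the producer replays from the minimum requested position, and each consumer drops (or idempotently re-applies) events before its own position:

```python
# Sketch: merged producer starts from the smallest requested position;
# each consumer filters out events earlier than its own requested position.

def choose_start_position(requested):
    # the producer must replay from the earliest position anyone asked for
    return min(requested.values())

def events_for(consumer, events, requested):
    # a consumer skips events before its own position, so the duplicated
    # replay caused by the merged start stays harmless (idempotency)
    return [e for e in events if e >= requested[consumer]]

requested = {"search": 120, "auth": 80}
start = choose_start_position(requested)        # auth forces replay from 80
replayed = list(range(start, 130, 10))          # positions 80..120
```

Here the search consumer sees events 80–110 twice the moment anything goes wrong and is replayed, which is why the sync handlers must tolerate duplicates.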

Misc

“There are still many things to think about: a consumer should identify itself with an id; what info is needed to register a consumer besides the interested schema/table/column; how to dispatch a sync event when the producer receives it; and how to make sure the data is actually received by the consumer, i.e. the ack process…” Since he couldn’t come up with every problem in advance, Tony decided to draw the new architecture first, then start implementing it and solve the other miscellaneous challenges as he met them:

(figure: the redesigned producer/consumer architecture of syncer)
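For the ack process listed among the open problems, one possible shape, purely illustrative and not a claim about how syncer solves it, is to track the last position each consumer has acknowledged and treat the minimum as the globally safe position to persist:

```python
# Sketch: the producer may only persist (and discard events up to) the
# smallest position acknowledged by every consumer, so a crash never
# loses an un-acked event for any of them. All names are hypothetical.

class AckTracker:
    def __init__(self, consumer_ids):
        self.acked = {cid: 0 for cid in consumer_ids}

    def ack(self, consumer_id, position):
        # acks may arrive out of order; never move backwards
        self.acked[consumer_id] = max(self.acked[consumer_id], position)

    def safe_position(self):
        # everything up to this position is handled by every consumer
        return min(self.acked.values())

tracker = AckTracker(["search", "auth"])
tracker.ack("search", 100)
tracker.ack("auth", 60)
```

A side effect of this scheme is that one lagging consumer holds back the safe position for everyone, which ties back to the slow-consumer problem above.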


It is seems very easy to implement c library function isdigit , but for a library code, performance is very important. So we will try to implement it and make it faster. Function So, first we make it right. int isdigit ( char c) { return c >= '0' && c <= '9' ; } Improvements One – Macro When it comes to performance for c code, macro can always be tried. #define isdigit (c) c >= '0' && c <= '9' Two – Table Upper version use two comparison and one logical operation, but we can do better with more space: # define isdigit(c) table[c] This works and faster, but somewhat wasteful. We need only one bit to represent true or false, but we use a int. So what to do? There are many similar functions like isalpha(), isupper ... in c header file, so we can combine them into one int and get result by table[c]&SOME_BIT , which is what source do. Source code of ctype.h : # define _ISbit(bit) (1 << (