
The Life of a Request to Google (2)


The life of a request

In the last blog post, we introduced what a request may experience when traveling from the user’s browser to the data center. Now the request has finally reached the Application Frontend, and the frontend needs to communicate with the backend.

Load Balancing

Third Trial: RPC

To communicate with the backend, the frontend can go the REST way or the RPC way. REST is built on the HTTP protocol; RPC can run directly on TCP or UDP. Considering communication efficiency, Google went the RPC way.

Along with RPC, they also adopted protocol buffers, which serialize data into a binary format that is both faster to parse and smaller than the textual data used by the HTTP protocol. To avoid the slow three-way handshake on every new connection, and TCP slow start, frontends keep long-lived connections to backend servers. When a frontend has been quiet for long enough, the TCP connection is closed to save resources, and periodic UDP datagrams are sent instead to keep health-checking the backend.
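As a rough illustration of why a binary encoding is smaller than a textual one, here is a sketch comparing a JSON payload with a fixed-layout struct encoding. The field names and values are made up, and real protocol buffers use a schema compiler and varint encoding, not Python’s struct module:

```python
import json
import struct

# A toy message: (user_id, latency_ms). Hypothetical fields for illustration.
message = {"user_id": 123456, "latency_ms": 42}

text = json.dumps(message).encode("utf-8")  # textual encoding, self-describing
binary = struct.pack("<IH", 123456, 42)     # 4-byte uint + 2-byte uint, schema implied

print(len(text), len(binary))  # the binary form is several times smaller
```

The textual form repeats the field names in every message; the binary form relies on both sides sharing the schema, which is exactly the trade-off protocol buffers make.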

But a data center may contain hundreds or thousands of backend servers, and it is impractical for every client to hold connections to all of them. So each client connects to only a subset of backends. The first task is to choose a reasonable subset size. There is no right answer for all systems; it should be decided according to the number of frontends and backends, the traffic of the data center, the load on each machine, and so on.

Once the size is decided, we need an algorithm to assign backends to clients. The selection algorithm should spread clients over backends uniformly, but not in a fixed way, so that it can handle machine failures and restarts. A simple random selection fails because, in practice, the variance in the number of clients per backend is too large:

import random

def Subset(backends, client_id, subset_size):
  # Keep each backend independently with probability subset_size / len(backends);
  # note that client_id is not even used.
  ratio = subset_size / len(backends)
  return [b for b in backends if random.random() < ratio]
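A quick simulation (with arbitrary numbers, not taken from the book) shows the problem: even when every backend should see 30 clients on average, the least and most connected backends end up far apart:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

def random_subset(backends, subset_size):
    # The naive per-client random selection from above:
    ratio = subset_size / len(backends)
    return [b for b in backends if random.random() < ratio]

backends = list(range(300))
counts = Counter()
for client in range(300):        # 300 clients, each wanting ~30 connections
    for b in random_subset(backends, 30):
        counts[b] += 1

# The expected load is 30 clients per backend, but the spread is wide:
print(min(counts.values()), max(counts.values()))
```

The per-backend count is binomially distributed, so with hundreds of backends the gap between the least and most connected one is routinely several standard deviations wide.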

The deterministic algorithm provided by Google looks like the following:

def Subset(backends, client_id, subset_size):
  backends = backends[:]  # shuffle a copy, not the caller's list
  subset_count = len(backends) // subset_size
  # Group clients into rounds; each round uses the same shuffled list:
  round = client_id // subset_count
  random.seed(round)
  random.shuffle(backends)
  # The subset id corresponding to the current client:
  subset_id = client_id % subset_count
  start = subset_id * subset_size
  return backends[start:start + subset_size]
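In contrast to the naive version, this algorithm partitions the shuffled backend list, so within each round of clients every backend is picked exactly once. A quick check with made-up numbers (the function is repeated here so the snippet is self-contained, with the divisions made explicitly integer and the shuffle applied to a copy so repeated calls see the same initial order):

```python
import random
from collections import Counter

def Subset(backends, client_id, subset_size):
    backends = backends[:]  # shuffle a copy, not the caller's list
    subset_count = len(backends) // subset_size
    random.seed(client_id // subset_count)  # clients in one round share a seed
    random.shuffle(backends)
    subset_id = client_id % subset_count
    start = subset_id * subset_size
    return backends[start:start + subset_size]

backends = ["backend-%d" % i for i in range(12)]
counts = Counter()
for client_id in range(8):  # 2 rounds of 4 clients each (subset_count = 4)
    for b in Subset(backends, client_id, subset_size=3):
        counts[b] += 1

# Each round partitions all 12 backends among its 4 clients, so every
# backend is chosen exactly once per round:
print(set(counts.values()))  # → {2}
```

Because the assignment is deterministic in client_id, a restarted client reconnects to the same subset, while a change in the backend list only reshuffles within rounds.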

Load Balancing Algorithms

Now that we hold connections to some healthy backends, the next step is to send requests to them with some strategy. The load-balancing policy can be very simple, like Round Robin; a little more complex, like Least-Loaded Round Robin; or informed by backend state, like Weighted Round Robin.

Round Robin

A simple round-robin algorithm sends requests to the backends in the subset one by one. It is very simple, but balances load poorly: the example in the book showed that the most loaded backend may use twice as much CPU as the least loaded. This may come from the following reasons:

  • Subsets that are too small, combined with clients carrying different loads;
  • Unequal cost of handling different requests;
  • Differences between machines;

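A minimal round-robin sender over the connected subset can be as small as this sketch (backend names are placeholders):

```python
import itertools

# Cycle through the connected subset, one backend per request:
backends = ["b1", "b2", "b3"]
rr = itertools.cycle(backends)

picks = [next(rr) for _ in range(5)]
print(picks)  # → ['b1', 'b2', 'b3', 'b1', 'b2']
```

Every backend receives the same number of requests, which is exactly why it balances poorly: the algorithm is blind to how expensive each request is and how loaded each machine already is.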
Least Loaded Round Robin

This algorithm requires the client to track the number of active requests it has to each backend, and to use Round Robin among the backends with the fewest active requests. In practice, they found that services using Least-Loaded Round Robin "will see their most loaded backend using twice as much CPU as the least loaded, performing about as poorly as Round Robin".
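The idea can be sketched like this (the tie-breaking order is invented here; this is not Google's implementation): the client counts its own active requests per backend and round-robins among the least loaded ones.

```python
import itertools

class LeastLoadedPicker:
    """Pick among the backends with the fewest active requests,
    breaking ties round-robin. A sketch for illustration only."""

    def __init__(self, backends):
        self.active = {b: 0 for b in backends}  # this client's in-flight count
        self.order = itertools.cycle(backends)

    def pick(self):
        fewest = min(self.active.values())
        # Advance round-robin until we hit a least-loaded backend:
        while True:
            b = next(self.order)
            if self.active[b] == fewest:
                self.active[b] += 1
                return b

    def done(self, backend):
        # Call when a response arrives, to release the slot:
        self.active[backend] -= 1

picker = LeastLoadedPicker(["b1", "b2", "b3"])
first, second = picker.pick(), picker.pick()
print(first, second)  # → b1 b2: b1 now has an in-flight request, so b2 is next
```

The weakness is visible in the code: `self.active` only counts this client's own requests, which says little about the backend's total load across all clients.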

Weighted Round Robin

The core reason the above algorithms fail to balance load well is that the client does not know the state of the backends (or, as Least-Loaded Round Robin does, chooses a bad criterion to estimate it: the client only counts its own active requests, not the backend's total load).

The weighted algorithm makes load-balancing decisions based on information provided by the backends. The client keeps a score for each backend and updates it according to the backends' responses: along with each response, backends include their current rate of queries and errors per second, in addition to system load (usually CPU usage). As the book reports, this algorithm indeed gets very good results in production.
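A sketch of score-based picking follows; the score formula below is invented for illustration, since the book only says backends report utilization and query/error rates:

```python
class WeightedPicker:
    """Score backends from their self-reported state. The scoring rule
    here is a made-up example, not Google's actual formula."""

    def __init__(self, backends):
        self.scores = {b: 1.0 for b in backends}  # higher = more headroom

    def pick(self):
        # Choose the backend currently believed to have the most capacity:
        return max(self.scores, key=self.scores.get)

    def update(self, backend, cpu_utilization, error_rate):
        # Backends piggyback their state on each response:
        self.scores[backend] = (1.0 - cpu_utilization) * (1.0 - error_rate)

picker = WeightedPicker(["b1", "b2"])
picker.update("b1", cpu_utilization=0.9, error_rate=0.0)  # b1 is busy
picker.update("b2", cpu_utilization=0.2, error_rate=0.0)
print(picker.pick())  # → b2
```

The key design point is the feedback loop: the signal comes from the backend itself, so the client's view of load stays current without extra health-check traffic.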


Finally, the request reaches the backends. The next step, on to the database, is quite similar to the previous one: now the backend is the client and the database is the "backend", so the same techniques work. And this is the end of the story.

Written with StackEdit.
