
LevelDB Source Reading (3): Compaction

In the last blog, which analyzed the read/write process of LevelDB, we saw that a write only touches the log file and the memory table; LevelDB then relies on the compaction process to move new updates into persistent sorted tables for future reads. Compaction is therefore a crucial part of the design, and we will dive into it in this blog.

Compaction

LevelDB compacts its underlying storage data in the background to improve read performance.

The sentence above is quoted from the LevelDB documentation; let's see how it is implemented by reviewing the code.

Background compaction

// db_impl.cc

void DBImpl::MaybeScheduleCompaction() {
    // use background thread to run compaction
    env_->Schedule(&DBImpl::BGWork, this);
}
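
The real function wraps this call in a few guard conditions so that at most one background task is in flight, and nothing is scheduled when there is no work to do. A lightly abridged sketch of the version of db_impl.cc read here:

void DBImpl::MaybeScheduleCompaction() {
  mutex_.AssertHeld();
  if (background_compaction_scheduled_) {
    // already scheduled
  } else if (shutting_down_.Acquire_Load()) {
    // DB is being deleted; no more background compactions
  } else if (imm_ == NULL && manual_compaction_ == NULL &&
             !versions_->NeedsCompaction()) {
    // no work to be done
  } else {
    background_compaction_scheduled_ = true;
    env_->Schedule(&DBImpl::BGWork, this);
  }
}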

Two main aspects

// background compaction is arranged (via MaybeScheduleCompaction)
// during Get, Open, and Write
void DBImpl::BackgroundCompaction() {
    // compact memtable
    CompactMemTable(); 
    // compact sorted table & log file
    // table_builder --> block_builder
    status = DoCompactionWork(compact);
}
Compact Memtable
// DBImpl::CompactMemTable
// convert memtable to persistent sorted table
Status s = WriteLevel0Table(imm_, &edit, base);
// apply edit to current version
s = versions_->LogAndApply(&edit, &mutex_);
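
Despite its name, WriteLevel0Table may place the new file above level 0 when it overlaps nothing in the lower levels. A sketch, abridged from DBImpl::WriteLevel0Table:

// DBImpl::WriteLevel0Table (abridged)
FileMetaData meta;
meta.number = versions_->NewFileNumber();
Iterator* iter = mem->NewIterator();
// dump the whole memtable into one sorted table file
Status s = BuildTable(dbname_, env_, options_, table_cache_, iter, &meta);
delete iter;

int level = 0;
if (s.ok() && meta.file_size > 0) {
  if (base != NULL) {
    // push the file down a few levels if it overlaps nothing there,
    // to reduce future compaction work
    level = base->PickLevelForMemTableOutput(meta.smallest.user_key(),
                                             meta.largest.user_key());
  }
  edit->AddFile(level, meta.number, meta.file_size,
                meta.smallest, meta.largest);
}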
Compact Sorted Table
// DBImpl::DoCompactionWork

// (A) get an iterator that merges the two input levels' sorted table
// files, covering the range of data to be compacted
Iterator* input = versions_->MakeInputIterator(compact->compaction);
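// MakeInputIterator returns a MergingIterator over all inputs:
// one iterator per level-0 input file (level-0 files may overlap
// each other), and one concatenating iterator per non-zero level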

// -- iteration start --
// (B) test if this key should be dropped:
// - hidden by a newer entry for the same user key that is already
//   visible to every live snapshot --> drop
// - deletion marker (kTypeDeletion) with sequence <= smallest_snapshot
//   and no overlapping data in any higher level --> drop
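// the actual test, abridged from the loop body of DoCompactionWork;
// last_sequence_for_key is reset to kMaxSequenceNumber whenever a
// new user key is encountered
bool drop = false;
if (last_sequence_for_key <= compact->smallest_snapshot) {
  // hidden by a newer entry for the same user key
  drop = true;
} else if (ikey.type == kTypeDeletion &&
           ikey.sequence <= compact->smallest_snapshot &&
           compact->compaction->IsBaseLevelForKey(ikey.user_key)) {
  // deletion marker that no longer hides anything, and the key
  // does not appear in any higher level
  drop = true;
}
last_sequence_for_key = ikey.sequence;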

// (C) add the key and value to the table_builder if not dropped
if (!drop) {
  // Open output file if necessary
  if (compact->builder == NULL) {
    status = OpenCompactionOutputFile(compact);
  }
  if (compact->builder->NumEntries() == 0) { // first key of this output
    compact->current_output()->smallest.DecodeFrom(key);
  }
  compact->current_output()->largest.DecodeFrom(key);
  compact->builder->Add(key, input->value());

  // Close output file if it is big enough
  if (compact->builder->FileSize() >=
      compact->compaction->MaxOutputFileSize()) {
    status = FinishCompactionOutputFile(compact, input);
  }
}
// -- iteration end --

// (D) after the iteration, install the changes
// (DBImpl::InstallCompactionResults):
// - version edit: delete the input files, add the output files
// - versionSet: log the edit to the MANIFEST and apply it as the
//   new current version
compact->compaction->AddInputDeletions(compact->compaction->edit());
for (size_t i = 0; i < compact->outputs.size(); i++) {
  const CompactionState::Output& out = compact->outputs[i];
  compact->compaction->edit()->AddFile(
      level + 1,
      out.number, out.file_size, out.smallest, out.largest);
}
return versions_->LogAndApply(compact->compaction->edit(), &mutex_);

Two sources: manual or auto

if (is_manual) { 
  ManualCompaction* m = manual_compaction_;
  c = versions_->CompactRange(m->level, m->begin, m->end);
  m->done = (c == NULL);
  if (c != NULL) {
    manual_end = c->input(0, c->num_input_files(0) - 1)->largest;
  }
} else {
  c = versions_->PickCompaction();
}
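
The manual path is driven by the public DB::CompactRange API. A minimal usage sketch (the database path and key range are made up for illustration):

#include <cassert>
#include "leveldb/db.h"

int main() {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);
  assert(s.ok());

  // force compaction of all keys in ["a", "z"]; passing NULL for
  // begin/end means "from the first key" / "to the last key"
  leveldb::Slice begin("a");
  leveldb::Slice end("z");
  db->CompactRange(&begin, &end);

  delete db;
  return 0;
}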

When will auto compaction be triggered?

// version_set.h
// Returns true iff some level needs a compaction.
bool NeedsCompaction() const {
  Version* v = current_;
  return (v->compaction_score_ >= 1) || (v->file_to_compact_ != NULL);
}
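
These two conditions map to two branches inside VersionSet::PickCompaction, where the size-based trigger takes priority over the seek-based one. Abridged:

// VersionSet::PickCompaction (abridged)
const bool size_compaction = (current_->compaction_score_ >= 1);
const bool seek_compaction = (current_->file_to_compact_ != NULL);
if (size_compaction) {
  // start from the first file after the key where the previous
  // compaction of this level ended (compact_pointer_), so repeated
  // compactions rotate through the level's whole key space
  level = current_->compaction_level_;
} else if (seek_compaction) {
  // compact the single file that has been sought too many times
  level = current_->file_to_compact_level_;
} else {
  return NULL;  // nothing to do
}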

And when will the score and the file be set? There are two scenarios:

  • Update the score in the final phase of a compaction (VersionSet::Finalize, called from VersionSet::LogAndApply) to prepare for the next compaction; the size limit MaxBytesForLevel is sketched after this list
// VersionSet::Finalize
if (level == 0) {
  // We treat level-0 specially by bounding the number of files
  // instead of number of bytes for two reasons:
  //
  // (1) With larger write-buffer sizes, it is nice not to do too
  // many level-0 compactions.
  //
  // (2) The files in level-0 are merged on every read and
  // therefore we wish to avoid too many files when the individual
  // file size is small (perhaps because of a small write-buffer
  // setting, or very high compression ratios, or lots of
  // overwrites/deletions).
  score = v->files_[level].size() /
      static_cast<double>(config::kL0_CompactionTrigger);
} else {
  // Compute the ratio of current size to size limit.
  const uint64_t level_bytes = TotalFileSize(v->files_[level]);
  score =
      static_cast<double>(level_bytes) / MaxBytesForLevel(options_, level);
}
  • Update the file to compact when reading (Version::UpdateStats, called from DBImpl::Get); the allowed_seeks budget is sketched after this list
// file will be compacted if it is sought too many times
// Adds "stats" into the current state. Returns true if a new
// compaction may need to be triggered, false otherwise.
// bool UpdateStats(const GetStats& stats);
FileMetaData* f = stats.seek_file;
if (f != NULL) {
  f->allowed_seeks--;
  if (f->allowed_seeks <= 0 && file_to_compact_ == NULL) {
    file_to_compact_ = f;
    file_to_compact_level_ = stats.seek_file_level;
    return true;
  }
}
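
The size limit used by Finalize above grows by a factor of ten per level; from version_set.cc:

static double MaxBytesForLevel(const Options* options, int level) {
  // the result for level 0 is not really used, since level 0 is
  // bounded by file count instead of bytes
  double result = 10. * 1048576.0;  // both level-0 and level-1: 10MB
  while (level > 1) {
    result *= 10;
    level--;
  }
  return result;
}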
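
And the allowed_seeks budget consumed above is set when a file is first added to a version, in VersionSet::Builder::Apply, on the theory that one wasted seek costs about as much as compacting 16KB of data:

// VersionSet::Builder::Apply (abridged): one allowed seek is
// granted per 16KB of file size, with a floor of 100 seeks
f->allowed_seeks = static_cast<int>(f->file_size / 16384);
if (f->allowed_seeks < 100) f->allowed_seeks = 100;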

In conclusion: once there have been enough reads or writes against the database, an auto compaction will be triggered.

Why Compaction

LevelDB stores each SSTable in a “level” and compacts a level whenever it holds too many SSTables (too many bytes, or too many files for level 0), promoting the data to the next level. While compacting the sorted table files, it drops stale values and deletion markers, which saves both search time and storage.

Because the design shared by LevelDB and many other NoSQL systems (Log + MemTable + SSTable) keeps its on-disk files immutable, compaction becomes an essential feature for supporting updates; since it runs on a background thread with sequential IO, the added latency is very small. More than that, it is also what makes multi-version reads work, so it is very common in NoSQL systems.

Written with StackEdit.
