In the last blog post, which analyzed the read/write path of LevelDB, we saw that a write only touches the log file and the memory table; LevelDB then relies on the compaction process to move new updates into persistent sorted tables for later reads. Compaction is therefore a crucial part of the design, and this post dives into it.
Compaction
LevelDB compacts its underlying storage data in the background to improve read performance.
The sentence above is quoted from the LevelDB documentation, and we will see how it is implemented by reviewing the code.
Background compaction
// db_impl.cc
void DBImpl::MaybeScheduleCompaction() {
// use background thread to run compaction
env_->Schedule(&DBImpl::BGWork, this);
}
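The excerpt above is heavily abbreviated. Roughly speaking, the real function only hands BGWork to the background thread when no compaction task is already queued, the database is not shutting down, and there is actually work to do (an immutable memtable waiting to be flushed, a pending manual compaction, or a level that needs compaction). Below is a standalone sketch of that gating logic; the struct and member names are illustrative, not the exact DBImpl fields.
// Standalone sketch of the gate around env_->Schedule (illustrative names).
struct CompactionScheduler {
  bool background_scheduled = false;  // a background task is already queued
  bool shutting_down = false;         // the database is closing
  bool has_imm = false;               // an immutable memtable is waiting
  bool needs_compaction = false;      // some level needs a compaction

  template <typename ScheduleFn>
  void MaybeSchedule(ScheduleFn schedule) {
    if (background_scheduled || shutting_down) return;  // nothing to queue now
    if (!has_imm && !needs_compaction) return;          // no work to do
    background_scheduled = true;
    schedule();  // in LevelDB: env_->Schedule(&DBImpl::BGWork, this)
  }
};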
The background work has two main parts:
// background compaction is arranged when Get, Open, and Write are called
void DBImpl::BackgroundCompaction() {
// compact memtable
CompactMemTable();
// compact sorted table & log file
// table_builder --> block_builder
status = DoCompactionWork(compact);
}
Compact Memtable
// DBImpl::CompactMemTable
// convert memtable to persistent sorted table
Status s = WriteLevel0Table(imm_, &edit, base);
// apply edit to current version
s = versions_->LogAndApply(&edit, &mutex_);
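Conceptually, WriteLevel0Table iterates the immutable memtable in key order, streams every entry into a new table file through a TableBuilder, and records the file's smallest/largest key and size so that LogAndApply can register it in the new version. The toy sketch below mimics that idea, with std::map standing in for MemTable and a plain text file standing in for the real table format; it is not the actual LevelDB code.
#include <cstdint>
#include <fstream>
#include <map>
#include <string>

// Toy sketch: dump a sorted in-memory table to a single "level-0" file and
// record its key range, mimicking what WriteLevel0Table puts into the VersionEdit.
struct FileMeta {
  std::string smallest, largest;
  uint64_t file_size = 0;
};

FileMeta DumpMemTable(const std::map<std::string, std::string>& mem,
                      const std::string& fname) {
  std::ofstream out(fname, std::ios::binary);
  FileMeta meta;
  bool first = true;
  for (const auto& [key, value] : mem) {  // iteration is already key-ordered
    if (first) { meta.smallest = key; first = false; }
    meta.largest = key;
    out << key << '\t' << value << '\n';  // toy on-disk format
  }
  meta.file_size = static_cast<uint64_t>(out.tellp());
  return meta;
}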
Compact Sorted Table
// DBImpl::DoCompactionWork
// (A) get an iterator over the two levels' sorted table files
// that contain the specific range of data to merge
Iterator* input = versions_->MakeInputIterator(compact->compaction);
// -- iteration start --
// (B) test whether this key should be dropped:
// - key type is kTypeDeletion --> drop
// - sequence number <= smallest_snapshot --> drop
// (C) add key and value to table_builder if not dropped
if (!drop) {
// Open output file if necessary
if (compact->builder->NumEntries() == 0) { // empty builder
compact->current_output()->smallest.DecodeFrom(key);
}
compact->current_output()->largest.DecodeFrom(key);
compact->builder->Add(key, input->value());
// Close output file if it is big enough
}
// -- iteration end --
// (D) after iteration, install the changes:
// - log: clear the related log entries
// - VersionSet: record the new files in a new version
// DBImpl::InstallCompactionResults
for (size_t i = 0; i < compact->outputs.size(); i++) {
const CompactionState::Output& out = compact->outputs[i];
compact->compaction->edit()->AddFile(
level + 1,
out.number, out.file_size, out.smallest, out.largest);
}
return versions_->LogAndApply(compact->compaction->edit(), &mutex_);
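Step (B) is the heart of the merge: the input iterator yields entries ordered by user key (ascending) and then by sequence number (descending), so for each user key only the newest entry still visible to the oldest live snapshot needs to be kept, and a deletion marker can itself be dropped once no deeper level may still hold that key. The self-contained sketch below follows the drop rule of that loop, but simplifies it: it assumes the base-level check (Compaction::IsBaseLevelForKey) always passes and operates on an already merged vector instead of the iterator.
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the drop rule in DoCompactionWork. Entries arrive ordered by
// (user key ascending, sequence descending), as from the merging iterator.
struct Entry {
  std::string user_key;
  uint64_t sequence;
  bool is_deletion;  // corresponds to kTypeDeletion
};

// smallest_snapshot is the oldest sequence number any live snapshot can see.
// For simplicity, the key is assumed not to exist in deeper levels
// (i.e. IsBaseLevelForKey always returns true).
std::vector<Entry> CompactEntries(const std::vector<Entry>& input,
                                  uint64_t smallest_snapshot) {
  std::vector<Entry> kept;
  std::string current_user_key;
  bool has_current_user_key = false;
  uint64_t last_sequence_for_key = UINT64_MAX;  // kMaxSequenceNumber

  for (const Entry& e : input) {
    if (!has_current_user_key || e.user_key != current_user_key) {
      current_user_key = e.user_key;       // first occurrence of this user key
      has_current_user_key = true;
      last_sequence_for_key = UINT64_MAX;
    }
    bool drop = false;
    if (last_sequence_for_key <= smallest_snapshot) {
      drop = true;  // hidden by a newer entry for the same user key
    } else if (e.is_deletion && e.sequence <= smallest_snapshot) {
      drop = true;  // deletion marker is no longer needed
    }
    last_sequence_for_key = e.sequence;
    if (!drop) kept.push_back(e);
  }
  return kept;
}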
Two sources: manual or auto
if (is_manual) {
ManualCompaction* m = manual_compaction_;
c = versions_->CompactRange(m->level, m->begin, m->end);
m->done = (c == NULL);
if (c != NULL) {
manual_end = c->input(0, c->num_input_files(0) - 1)->largest;
}
} else {
c = versions_->PickCompaction();
}
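The manual branch is driven by the public API: calling DB::CompactRange eventually reaches DBImpl and populates manual_compaction_ for the branch above. A minimal usage example (the database path is arbitrary):
#include <cassert>
#include "leveldb/db.h"

int main() {
  leveldb::DB* db = nullptr;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/compaction_demo", &db);
  assert(s.ok());

  // A null begin/end is treated as "before all keys" / "after all keys",
  // so this requests a manual compaction of the entire key range.
  db->CompactRange(nullptr, nullptr);

  delete db;
  return 0;
}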
When will auto compaction be triggered?
// version_set.h
// Returns true iff some level needs a compaction.
bool NeedsCompaction() const {
Version* v = current_;
return (v->compaction_score_ >= 1) || (v->file_to_compact_ != NULL);
}
And when will the score and the file to compact be ready? There are two scenarios:
- The score is updated when a compaction is in its final phase (VersionSet::LogAndApply), to prepare for the next compaction:
// VersionSet::Finalize
if (level == 0) {
// We treat level-0 specially by bounding the number of files
// instead of number of bytes for two reasons:
//
// (1) With larger write-buffer sizes, it is nice not to do too
// many level-0 compactions.
//
// (2) The files in level-0 are merged on every read and
// therefore we wish to avoid too many files when the individual
// file size is small (perhaps because of a small write-buffer
// setting, or very high compression ratios, or lots of
// overwrites/deletions).
score = v->files_[level].size() /
static_cast<double>(config::kL0_CompactionTrigger);
} else {
// Compute the ratio of current size to size limit.
const uint64_t level_bytes = TotalFileSize(v->files_[level]);
score =
static_cast<double>(level_bytes) / MaxBytesForLevel(options_, level);
}
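For levels above 0 the byte limit grows by roughly a factor of ten per level (about 10MB for level 1, 100MB for level 2, and so on), so the score is simply the level's current size divided by its limit, and a score of 1 or more marks the level as a compaction candidate. The standalone sketch below mirrors that calculation; the constants follow LevelDB's defaults and the numbers in main are made up for illustration.
#include <cstdint>
#include <cstdio>

// Size limit for levels >= 1: ~10MB for level 1, growing 10x per level
// (mirrors LevelDB's default MaxBytesForLevel).
double MaxBytesForLevel(int level) {
  double result = 10.0 * 1048576.0;
  while (level > 1) {
    result *= 10;
    --level;
  }
  return result;
}

int main() {
  const uint64_t level_bytes = 35 * 1048576ULL;  // pretend level 2 holds 35MB
  const int level = 2;
  const double score =
      static_cast<double>(level_bytes) / MaxBytesForLevel(level);
  std::printf("level %d score = %.2f (needs compaction: %s)\n",
              level, score, score >= 1 ? "yes" : "no");
  return 0;
}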
- The file to compact is updated when a read happens (DBImpl::Get):
// file will be compacted if it is sought too many times
// Adds "stats" into the current state. Returns true if a new
// compaction may need to be triggered, false otherwise.
// bool UpdateStats(const GetStats& stats);
FileMetaData* f = stats.seek_file;
if (f != NULL) {
f->allowed_seeks--;
if (f->allowed_seeks <= 0 && file_to_compact_ == NULL) {
file_to_compact_ = f;
file_to_compact_level_ = stats.seek_file_level;
return true;
}
}
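The allowed_seeks budget itself is assigned when the file is first added to a version. The reasoning in the LevelDB source comments is that one seek costs roughly as much as compacting about 40KB of data, so the budget is conservatively set to one seek per 16KB of file size, with a floor of 100. The sketch below shows the budget and the countdown with illustrative names; it is modeled on, but not identical to, the code in version_set.cc.
#include <cstdint>

// Illustrative sketch of the seek-based trigger.
struct SeekFile {
  uint64_t file_size = 0;
  int allowed_seeks = 0;
};

// Budget roughly one seek per 16KB of data, with a floor of 100 seeks.
void InitAllowedSeeks(SeekFile* f) {
  f->allowed_seeks = static_cast<int>(f->file_size / 16384u);
  if (f->allowed_seeks < 100) f->allowed_seeks = 100;
}

// Charge one seek when a Get had to consult this file and then keep
// searching; returns true once the file should be scheduled for compaction.
bool ChargeSeek(SeekFile* f) {
  return --f->allowed_seeks <= 0;
}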
In conclusion, once there have been enough reads and writes on the database, an automatic compaction will be triggered.
Why Compaction
LevelDB stores each SSTable in a "level" and compacts a level whenever it accumulates too many SSTables (by size or by number), promoting data to the next level. During compaction of the sorted table files, stale values and deletion markers are dropped, which saves both lookup time and storage.
Because LevelDB, like many other NoSQL systems built from a log + MemTable + SSTable, never modifies its sorted table files in place, compaction becomes an essential part of making updates take effect. Since it runs on a background thread and uses sequential I/O, the added latency is very small. Beyond that, compaction is also what makes multi-versioning work, so it is very common in NoSQL systems.
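To make this concrete, the small program below (using only the public API; the path and counts are arbitrary) overwrites a single key many times: the stale versions keep occupying space in older files until a compaction merges them away, which you can observe by printing the "leveldb.stats" property before and after a manual compaction.
#include <cassert>
#include <iostream>
#include <string>
#include "leveldb/db.h"

int main() {
  leveldb::DB* db = nullptr;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/overwrite_demo", &db);
  assert(s.ok());

  // Overwrite the same key repeatedly; each version is written to disk
  // and only removed later by compaction.
  const std::string value(4096, 'x');
  for (int i = 0; i < 20000; i++) {
    db->Put(leveldb::WriteOptions(), "hot_key", value);
  }

  std::string stats;
  db->GetProperty("leveldb.stats", &stats);
  std::cout << "before manual compaction:\n" << stats << std::endl;

  db->CompactRange(nullptr, nullptr);  // merge away the obsolete versions
  db->GetProperty("leveldb.stats", &stats);
  std::cout << "after manual compaction:\n" << stats << std::endl;

  delete db;
  return 0;
}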
Written with StackEdit.