
Elasticsearch Adventure(1): Trial and Error

Tony is an energetic university graduate who has just been hired by a software company. He is interested in many new technologies, and recently he became responsible for developing a simple search engine as a trial project.

He skimmed the basic tutorials and books online (ES Introduction, Elasticsearch Concepts Confusion, etc.) and started developing his search engine.

As the search system developed, he came across more and more problems with Elasticsearch. Fortunately, he has a mentor, Tim, who has rich experience with ES.

Query String

Tony: “Mentor, our search engine needs to support special search operators, like xx AND yy for the intersection of results and aa OR bb for the union. Does ES support something like this?”

Tim: “Yes, ES supports this feature with a query type called the query string query. Let’s see an example:”

GET /_search
{
    "query": {
        "query_string" : {
            "fields" : ["content", "name"], # search against both
            "query" : "a AND b" # both have to appear
        }
    }
}

“The query string syntax supports many features, from the commonly used boolean operators, ranges, and groups, to advanced wildcards, regular expressions, fuzziness, etc. You may want to refer to the documentation for more details.”
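A hedged sketch combining several of these features in one query (the count field and the terms are illustrative): parentheses for grouping, a numeric range, a ?-wildcard, and a ~ fuzzy term.

GET /_search
{
    "query": {
        "query_string" : {
            "fields" : ["content", "name"],
            "query" : "(a OR b) AND count:[1 TO 5] AND qu?ck AND quikc~1" # group, range, wildcard, fuzzy
        }
    }
}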

Missing Values

Tony: “When we index our data into Elasticsearch, we sometimes don’t have values for all of the fields of a type, so some fields end up null. Then, when searching, we need to deal with these null values, otherwise we will encounter errors. So, mentor, do you have any solutions?”

Tim: “I believe you already know how to solve a similar problem in the SQL world.”

Tony: “Maybe. In SQL, I can select only the rows where a field is not null, like this:”

SELECT tags
FROM   posts
WHERE  tags IS NOT NULL AND xxx

Tim: “Good job. Elasticsearch has similar syntax, called the exists/missing queries, and the corresponding Elasticsearch query may look like this:”

GET /my_index/posts/_search
{
    "query" : {
        "constant_score" : { # just filter, use constant score
            "filter" : {
                "exists" : { "field" : "tags" } # is tags exists?
            }
        }
    }
}
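A side note on the missing half of the exists/missing pair: the standalone missing query was removed in later ES versions, and the same “field is absent” check is expressed by negating exists inside a bool query, as in this sketch:

GET /my_index/posts/_search
{
    "query" : {
        "bool" : {
            "must_not" : {
                "exists" : { "field" : "tags" } # matches docs where tags is absent
            }
        }
    }
}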

Tony: “But, as far as I understand, it seems this Elasticsearch query only checks whether the field exists, not what value it holds?”

Tim: “Yes, you are very keen on the syntax. However, we should understand that if a value does not exist, it cannot be stored in the inverted index¹, which is where values are kept for searching documents. So, in Elasticsearch, we cannot tell the difference between ‘this document does not have this field’ and ‘this field does not have a value’.”

Tony: “OK, I understand that. But sometimes we may need to distinguish between a field that is explicitly set to null and a field that is not set at all. How can we achieve that?”

Tim: “Good question. ES ran into this question too, so it lets us define a placeholder for null in our mappings (the null_value parameter). What we should take care of is matching the field’s data type and not picking a placeholder value that could appear in real data.”
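A minimal sketch of such a mapping using the null_value parameter (the _null_ placeholder is illustrative, and the keyword type assumes ES 5+; older versions use a not_analyzed string instead):

PUT /my_index
{
    "mappings" : {
        "posts" : {
            "properties" : {
                "tags" : {
                    "type" : "keyword",
                    "null_value" : "_null_" # indexed in place of an explicit null
                }
            }
        }
    }
}

With this mapping, a document indexed with "tags": null can be found by searching for the placeholder _null_, while a document that omits tags entirely still matches only the must_not/exists query above.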

Tony: “Wonderful! Thanks for your help.”

Chinese Analyzer

Tony: “Mentor, I came across another tricky problem. When dealing with Chinese, the analyzer becomes much more important and more complex, because the standard built-in analyzer² of Elasticsearch can only split Chinese text character by character, like this:”

data 你好北京
analyzed 你 好 北 京
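Tokenization like this can be reproduced with the _analyze API; a minimal sketch with the built-in standard analyzer (swap in any installed analyzer name to compare):

GET /_analyze
{
    "analyzer" : "standard",
    "text" : "你好北京" # returns one token per character: 你, 好, 北, 京
}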

Tim: “So how did you handle it?”

Tony: “After some searching, I chose smartcn, the Chinese analyzer covered in the official Elasticsearch documentation, which gives a better result:”

data 你好北京
analyzed 你好 北京

Tim: “Yes, it looks much better.”

Tony: “But today, when testing the functionality of our search engine, we met some problems:”

data 你好北京 你好1212

search 你好
result 你好1212

search 你好北京
result 你好北京
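The failing case can be reproduced with a plain match query; a sketch with illustrative index and field names:

GET /my_index/posts/_search
{
    "query" : {
        "match" : { "content" : "你好" } # analyzed into 你 and 好, so it hits 你好1212 but misses 你好北京
    }
}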

Tim: “Hmm, it’s not working. Did you find out what’s wrong?”

Tony: “Yes, I found it is caused by smartcn’s tokenizer:”

data 你好北京 
analyzed 你好 北京 

data 你好1212
analyzed 你 好 1212 

data 你好
analyzed 你 好

“As we can see, it produces different tokens for the same word depending on the surrounding text: 你好 stays a single token in 你好北京 but is split into 你 and 好 in 你好1212 and in the query itself, which causes the mismatch.”

Tim: “In this situation, I suggest you try a few different analyzers and compare their results on more sentences to choose a better one. Here are some analyzers that I know of (see the installation sketch after the list):”

  • analysis-icu
    • tokenizer: icu_tokenizer
  • analysis-ik
    • analyzer: ik_smart
    • analyzer: ik_max_word
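Both ship as Elasticsearch plugins. A sketch of the installation commands: analysis-icu comes from the official plugin repository, while analysis-ik is a third-party plugin installed from a release zip whose URL must match your ES version (the <version> parts below are placeholders):

bin/elasticsearch-plugin install analysis-icu
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v<version>/elasticsearch-analysis-ik-<version>.zip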

The next day, Tony came to Tim again.

Tony: “Mentor, I finished the installation and testing; you may like to see the results:”

data IPO企业清障“三类股东”还需监管层发力

smartcn IPO 企业 清 障 “三类 股东” 还 需 监管 层 发 力
icu IPO 企业 清 障 “三 类 股东” 还 需 监管 层 发 力
ik_smart IPO 企业 清障 “三类 股东” 还需 监管 层 发 力
ik_max_word: this analyzer emits all valid combinations of Chinese characters in the sentence, so it is not suitable for our use case

“From the above results, we can see that ik_smart works best, with only one error. Even better, ik makes it easy to add custom dictionaries.”

Tim: “Fine, we can choose ik_smart as our analyzer.”
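A minimal sketch of wiring the chosen analyzer into a mapping (index, type, and field names are illustrative, and the text type assumes ES 5+):

PUT /my_index
{
    "mappings" : {
        "posts" : {
            "properties" : {
                "content" : {
                    "type" : "text",
                    "analyzer" : "ik_smart" # used at both index and search time unless a search_analyzer is set
                }
            }
        }
    }
}

The easy dictionary support Tony mentions refers to ik’s extension dictionaries, which are plain word-list files registered in the plugin’s IKAnalyzer.cfg.xml.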

Ref



  1. The inverted index is a data structure ES uses to enable fast search; more details here. ↩︎

  2. An analyzer is composed of character filters, a tokenizer, and token filters. Details can be found here. ↩︎
