Query Language Model
1 TF-IDF
In a given document, term frequency (TF) is the frequency with which a given term occurs in that document. The raw count is normalized by the document's length to prevent a bias toward longer documents (the same term will tend to have a higher raw count in a long document than in a short one, regardless of how important it is). For a term t_i in a particular document d_j, its importance can be expressed as:

tf_{i,j} = \frac{n_{i,j}}{\sum_k n_{k,j}}

where n_{i,j} is the number of occurrences of the term t_i in document d_j, and the denominator is the sum of the occurrence counts of all terms in d_j.
Inverse document frequency (IDF) is a measure of how generally important a term is. The IDF of a particular term is obtained by dividing the total number of documents by the number of documents containing that term, then taking the logarithm of the quotient:

idf_i = \log \frac{|D|}{|\{j : t_i \in d_j\}|}

where
- |D|: the total number of documents in the corpus
- |\{j : t_i \in d_j\}|: the number of documents containing the term t_i (i.e. documents with n_{i,j} \neq 0). If the term does not occur in the corpus at all, this divisor is zero, so in practice 1 + |\{j : t_i \in d_j\}| is used.

Then

tfidf_{i,j} = tf_{i,j} \times idf_i
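The tf-idf computation above can be sketched in Python. This is a minimal illustration over a toy corpus; the corpus, function names, and the 1 + df smoothing in the divisor are as described above, everything else is illustrative:

```python
import math
from collections import Counter

# Toy corpus: each document is a list of tokens (illustrative data).
corpus = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "dogs and cats are pets".split(),
]

def tf(term, doc):
    # tf_{i,j} = n_{i,j} / sum_k n_{k,j}
    return Counter(doc)[term] / len(doc)

def idf(term, docs):
    # idf_i = log(|D| / (1 + df)); the +1 guards against terms
    # that occur in no document, as noted above
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / (1 + df))

def tfidf(term, doc, docs):
    # tfidf_{i,j} = tf_{i,j} * idf_i
    return tf(term, doc) * idf(term, docs)
```

Note that a term occurring in every document (here, with the +1, in 2 of 3 documents) gets an idf of zero, so its tf-idf weight vanishes no matter how frequent it is in any single document.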
2 BM25
BM25 takes into account tf, qtf, and document length. Given a query Q containing keywords q_1, \dots, q_n, the BM25 score of a document D is:

\text{score}(D, Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)}

where f(q_i, D) is q_i's term frequency in the document D, |D| is the length of the document D in words, and avgdl is the average document length in the text collection from which documents are drawn. k_1 and b are free parameters, usually chosen, in the absence of an advanced optimization, as k_1 \in [1.2, 2.0] and b = 0.75.[1] IDF(q_i) is the IDF (inverse document frequency) weight of the query term q_i. It is usually computed as:

\text{IDF}(q_i) = \log \frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}

where N is the total number of documents in the collection, and n(q_i) is the number of documents containing q_i.
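The scoring formula above can be sketched as follows, using the default parameters k_1 = 1.5 and b = 0.75 from the range just given; the toy collection and function names are illustrative:

```python
import math
from collections import Counter

# Toy collection of tokenized documents (illustrative data).
docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "dogs and cats are pets".split(),
]

def bm25_score(query_terms, doc, docs, k1=1.5, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    counts = Counter(doc)
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in docs if q in d)            # n(q_i), document frequency
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5))   # IDF(q_i) as above
        f = counts[q]                                    # f(q_i, D)
        denom = f + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * f * (k1 + 1) / denom
    return score
```

Note that this IDF variant goes negative for terms appearing in more than half the collection, which some implementations clip at zero; a term absent from the document (f = 0) contributes nothing.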
3 Query likelihood
Rank documents by the probability that the query could be generated by the document's language model (i.e. that query and document are about the same topic).
Given a query, start with P(D|Q).
Using Bayes' Rule:

P(D|Q) = \frac{P(Q|D) \, P(D)}{P(Q)} \propto P(Q|D) \, P(D)

Assuming the document prior P(D) is uniform, and using a unigram model:

P(Q|D) = \prod_{i=1}^{n} P(q_i \mid D)
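A minimal sketch of this ranking, computed in log space with maximum-likelihood unigram estimates (the document and function name are illustrative):

```python
import math
from collections import Counter

# Example document (illustrative).
doc = "the cat sat on the mat".split()

def log_query_likelihood(query_terms, doc):
    # log P(Q|D) = sum_i log P(q_i | D), with P(q_i | D) estimated
    # by maximum likelihood as f(q_i, D) / |D|
    counts = Counter(doc)
    logp = 0.0
    for q in query_terms:
        p = counts[q] / len(doc)
        if p == 0.0:
            # A single unseen query term zeroes out P(Q|D),
            # which is what motivates smoothing
            return float("-inf")
        logp += math.log(p)
    return logp
```

The hard zero for unseen terms is the weakness of the unsmoothed estimate, addressed by the smoothing methods below.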
[Jelinek-Mercer Smoothing]

P(q_i \mid D) = (1 - \lambda) \frac{f(q_i, D)}{|D|} + \lambda \frac{C_{q_i}}{|C|}

C_{q_i}: the number of occurrences of q_i in the corpus; |C|: the total number of word tokens in the corpus (not the vocabulary size; repeated occurrences of the same word are counted each time).
[Dirichlet Smoothing]

P(q_i \mid D) = \frac{f(q_i, D) + \mu \frac{C_{q_i}}{|C|}}{|D| + \mu}

where \mu is a pseudo-count parameter controlling the strength of the corpus prior.
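Both smoothing methods interpolate the document estimate with corpus statistics; a minimal sketch, assuming a small toy corpus and illustrative names (C_counts and C_total play the roles of C_{q_i} and |C| above):

```python
from collections import Counter

# Toy corpus of tokenized documents (illustrative data).
docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
]
C_counts = Counter(t for d in docs for t in d)  # corpus counts C_{q_i}
C_total = sum(C_counts.values())                # |C|: total tokens, not vocabulary

def p_jm(q, doc, lam=0.5):
    # Jelinek-Mercer: (1 - lambda) * f(q,D)/|D| + lambda * C_q/|C|
    return (1 - lam) * Counter(doc)[q] / len(doc) + lam * C_counts[q] / C_total

def p_dirichlet(q, doc, mu=2000):
    # Dirichlet: (f(q,D) + mu * C_q/|C|) / (|D| + mu)
    return (Counter(doc)[q] + mu * C_counts[q] / C_total) / (len(doc) + mu)
```

With either method, a query term absent from the document but present in the corpus now gets a nonzero probability, and the estimates still sum to 1 over the vocabulary.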