Search-result highlighting code for Lucene 2.4.0
Author: Java伴侣  Date: 2009-08-29
To use highlighting you need to pull in lucene-highlighter-2.4.0.jar separately. The jar lives in lucene-2.4.0\contrib\highlighter; copy it into the project's bin folder and add a reference to it in the Java Build Path. The following pages are useful references:
http://www.javaeye.com/wiki/topic/73588
http://hi.baidu.com/lotusxyhf/blog/item/cc06f634558516b4d0a2d329.html
The packages to import are:
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
Search code using the Highlighter:
/**
 * Search method with highlighting.
 */
private void search() {
    try {
        IndexSearcher searcher = new IndexSearcher(path);
        //Query query = wildcardQuery();
        //Query query = phraseQuery();
        //Query query = booleanQuery();
        Query query = queryParser(); // build the query from the user input
        TopDocCollector collector = new TopDocCollector(10);
        searcher.search(query, collector);
        ScoreDoc[] hits = collector.topDocs().scoreDocs;

        // highlighter setup
        SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<read>", "</read>");
        Highlighter highlighter = new Highlighter(simpleHTMLFormatter, new QueryScorer(query));
        // 200 is the length of the context returned around the keywords; tune it
        // yourself, since returning the entire body text is not practical
        highlighter.setTextFragmenter(new SimpleFragmenter(200));

        Document doc;
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i].doc);
            System.out.println(hits[i].score);
            // print the document content
            doc = searcher.doc(hits[i].doc);
            System.out.println(doc.toString());
            // highlighted output
            TokenStream tokenStream = new MMAnalyzer().tokenStream("token", new StringReader(doc.get("content")));
            System.out.println(highlighter.getBestFragment(tokenStream, doc.get("content")));
            // I have not checked the API in detail; my understanding is: wrap the text of
            // the "content" field in a Reader and tokenize it with MMAnalyzer's tokenStream
            // method, which yields tokens carrying offset information; then
            // Highlighter.getBestFragment marks up the tokens matching the query in the text.
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
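Conceptually, the SimpleHTMLFormatter("<read>","</read>") step just wraps each query-term occurrence in the opening/closing tags. Here is a minimal plain-Java sketch of that idea — no Lucene required, and the whitespace tokenization is a naive stand-in for a real Analyzer:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HighlightSketch {
    // Wrap each token that matches a query term in <read>...</read> tags,
    // mimicking what SimpleHTMLFormatter does for scored tokens.
    static String highlight(String text, Set<String> queryTerms) {
        StringBuilder out = new StringBuilder();
        for (String token : text.split(" ")) {
            if (queryTerms.contains(token.toLowerCase())) {
                out.append("<read>").append(token).append("</read>");
            } else {
                out.append(token);
            }
            out.append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        Set<String> terms = new HashSet<>(Arrays.asList("lucene"));
        System.out.println(highlight("Apache Lucene is a search library", terms));
        // prints: Apache <read>Lucene</read> is a search library
    }
}
```

The real Highlighter additionally scores tokens against the Query (via QueryScorer) and trims the output to a fragment, which the sketch below the API documentation illustrates separately.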
For reference, the API documentation for getBestFragment:
getBestFragment

public final String getBestFragment(TokenStream tokenStream,
                                    String text)
                             throws IOException

Highlights chosen terms in a text, extracting the most relevant section. The document text is analysed in chunks to record hit statistics across the document. After accumulating stats, the fragment with the highest score is returned.

Parameters:
tokenStream - a stream of tokens identified in the text parameter, including offset information. This is typically produced by an analyzer re-parsing a document's text. Some work may be done on retrieving TokenStreams more efficiently by adding support for storing original text position data in the Lucene index but this support is not currently available (as of Lucene 1.4 rc2).
text - text to highlight terms in
Returns:
highlighted text fragment or null if no terms found
Throws:
IOException
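The behaviour this documentation describes — chunk the text, score each chunk, return the top scorer or null — can be sketched in plain Java. This is an illustration of the algorithm only, not Lucene's implementation: real fragmenting respects token boundaries and term weights, while here fragments are fixed character slices (like SimpleFragmenter's size parameter) and the score is a raw hit count:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class BestFragmentSketch {
    // Split text into fragments of at most fragSize characters, score each
    // fragment by counting query-term occurrences, and return the highest
    // scorer -- or null when no term matches, as getBestFragment documents.
    static String bestFragment(String text, Set<String> terms, int fragSize) {
        String best = null;
        int bestScore = 0;
        for (int start = 0; start < text.length(); start += fragSize) {
            String frag = text.substring(start, Math.min(start + fragSize, text.length()));
            int score = 0;
            for (String token : frag.toLowerCase().split("\\W+")) {
                if (terms.contains(token)) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                best = frag;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Set<String> terms = new HashSet<>(Arrays.asList("lucene"));
        String text = "Plain text here. Lucene Lucene appears twice in this part.";
        // the 30-character slice containing both hits wins
        System.out.println(bestFragment(text, terms, 30));
    }
}
```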
String text)
throws IOException
Highlights chosen terms in a text, extracting the most relevant section. The document text is analysed in chunks to record hit statistics across the document. After accumulating stats, the fragment with the highest score is returned
Parameters:
tokenStream - a stream of tokens identified in the text parameter, including offset information. This is typically produced by an analyzer re-parsing a document's text. Some work may be done on retrieving TokenStreams more efficently by adding support for storing original text position data in the Lucene index but this support is not currently available (as of Lucene 1.4 rc2).
text - text to highlight terms in
Returns:
highlighted text fragment or null if no terms found
Throws:
IOException
For tokenization you can use JE-Analyzer 1.5.1 or another Chinese word-segmentation tool; the jar can be downloaded from the web. Its basic usage is similar to StandardAnalyzer's.
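Whichever analyzer you pick, what the highlighter actually needs from its TokenStream is each term's text plus its start/end character offsets, so the tags can be placed back into the original string. A rough plain-Java illustration of offset-carrying tokens (this is not the MMAnalyzer or TokenStream API, just the shape of the data):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenOffsetSketch {
    // A token carrying the offset information the highlighter relies on.
    static class Token {
        final String term;
        final int start;
        final int end;
        Token(String term, int start, int end) {
            this.term = term;
            this.start = start;
            this.end = end;
        }
    }

    // Emit tokens with character offsets into the source text; real analyzers
    // (StandardAnalyzer, MMAnalyzer, ...) expose the same through TokenStream.
    static List<Token> tokenize(String text) {
        List<Token> tokens = new ArrayList<>();
        Matcher m = Pattern.compile("\\S+").matcher(text);
        while (m.find()) {
            tokens.add(new Token(m.group(), m.start(), m.end()));
        }
        return tokens;
    }

    public static void main(String[] args) {
        for (Token t : tokenize("full text search")) {
            System.out.println(t.term + " [" + t.start + "," + t.end + ")");
        }
    }
}
```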