Extending the ansj word-segmentation plugin for Solr 5

Source code:

https://github.com/NLPchina/ansj_seg

Jar packages:

http://maven.nlpcn.org/org/ansj/

http://maven.nlpcn.org/org/nlpcn/nlp-lang

http://maven.nlpcn.org/org/ansj/tree_split/

Building the ansj plugin for Solr 5:

Download the latest ansj_seg source code and extend the Lucene 5 plugin project (ansj_seg/plugin/ansj_lucene5_plug): add the class org.ansj.solr5.AnsjTokenizerFactory shown below, then recompile to produce a new ansj_lucene5_plug-3.x.x.jar.
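
Assuming the plugin module builds with Maven (check the repository for its actual build setup), the rebuild is roughly:

cd ansj_seg/plugin/ansj_lucene5_plug
mvn clean package

The factory class itself: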

package org.ansj.solr5;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.ansj.lucene.util.AnsjTokenizer;
import org.ansj.splitWord.analysis.IndexAnalysis;
import org.ansj.splitWord.analysis.ToAnalysis;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class AnsjTokenizerFactory extends TokenizerFactory {

    boolean pstemming;           // parsed from args but not used by this factory
    boolean isQuery;             // true: ToAnalysis (query time); false: IndexAnalysis (index time)
    private String stopwordsDir; // path to the stopword file, one word per line
    public Set<String> filter;   // stopword set passed to AnsjTokenizer

    public AnsjTokenizerFactory(Map<String, String> args) {
        super(args);
        isQuery = getBoolean(args, "isQuery", true);
        pstemming = getBoolean(args, "pstemming", false);
        stopwordsDir = get(args, "stopwords");
        addStopwords(stopwordsDir);
    }

    // Load the stopword list (UTF-8, one word per line) into the filter set.
    private void addStopwords(String dir) {
        if (dir == null) {
            System.out.println("no stopwords dir");
            return;
        }
        System.out.println("stopwords: " + dir);
        filter = new HashSet<String>();
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(new FileInputStream(dir), "UTF-8"))) {
            String word;
            while ((word = br.readLine()) != null) {
                filter.add(word);
            }
        } catch (FileNotFoundException e) {
            System.out.println("No stopword file found");
        } catch (IOException e) {
            System.out.println("stopword file io exception");
        }
    }

    @Override
    public Tokenizer create(AttributeFactory factory) {
        if (isQuery) {
            // query time: coarse-grained segmentation
            return new AnsjTokenizer(new ToAnalysis(), filter);
        } else {
            // index time: fine-grained segmentation
            return new AnsjTokenizer(new IndexAnalysis(), filter);
        }
    }
}
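
The isQuery switch matters because the two analyses tokenize differently: ToAnalysis produces a single coarse-grained segmentation suited to queries, while IndexAnalysis emits finer-grained, overlapping terms that improve recall at index time. A minimal sketch to compare the two, assuming ansj_seg 3.x is on the classpath (the class name AnsjCompare is ours, for illustration only):

import org.ansj.splitWord.analysis.IndexAnalysis;
import org.ansj.splitWord.analysis.ToAnalysis;

public class AnsjCompare {
    public static void main(String[] args) {
        String text = "中国农业银行";
        // Coarse-grained segmentation, as used at query time.
        System.out.println(ToAnalysis.parse(text));
        // Fine-grained segmentation, as used at index time.
        System.out.println(IndexAnalysis.parse(text));
    }
}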

Deployment here uses the Jetty server bundled with Solr 5. Copy the tokenizer plugin and its dependency jars into /opt/solr/server/solr-webapp/webapp/WEB-INF/lib:

ansj_lucene5_plug-3.7.3.jar

ansj_seg-3.7.3.jar

nlp-lang-1.5.jar

Place the ansj configuration file (library.properties) in the /opt/solr/server/resources directory.
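
For reference, a minimal library.properties usually just points ansj at its user and ambiguity dictionaries. The keys below are the standard ansj 3.x ones; the paths are placeholders to adapt to your installation:

# user-defined dictionary
userLibrary=library/default.dic
# ambiguity-correction dictionary
ambiguityLibrary=library/ambiguity.dic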

Then configure a fieldType in the schema that uses the extended tokenizer:

    <fieldType name="text_ansj" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="org.ansj.solr5.AnsjTokenizerFactory" isQuery="false" stopwords="/path/to/stopwords.dic"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="org.ansj.solr5.AnsjTokenizerFactory" stopwords="/path/to/stopwords.dic"/>
      </analyzer>
    </fieldType>
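
With the type defined, declare a field that uses it (the field name content below is just an example) and reindex; the Analysis screen in the Solr admin UI is a quick way to verify that index-time and query-time tokenization differ as configured:

    <field name="content" type="text_ansj" indexed="true" stored="true"/>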

