Class CustomAnalyzer
java.lang.Object
org.apache.lucene.analysis.Analyzer
org.apache.lucene.analysis.custom.CustomAnalyzer
- All Implemented Interfaces:
Closeable, AutoCloseable
A general-purpose Analyzer that can be created with a builder-style API. Under the hood it uses
the factory classes
TokenizerFactory, TokenFilterFactory, and CharFilterFactory.
You can create an instance of this Analyzer using the builder by passing the SPI names (as defined by the ServiceLoader interface) to it:
Analyzer ana = CustomAnalyzer.builder(Paths.get("/path/to/config/dir"))
    .withTokenizer(StandardTokenizerFactory.NAME)
    .addTokenFilter(LowerCaseFilterFactory.NAME)
    .addTokenFilter(StopFilterFactory.NAME, "ignoreCase", "false", "words", "stopwords.txt", "format", "wordset")
    .build();
The parameters passed to components are also used by Apache Solr and are documented on their
corresponding factory classes. Refer to documentation of subclasses of TokenizerFactory,
TokenFilterFactory, and CharFilterFactory.
This is the same as the above:
Analyzer ana = CustomAnalyzer.builder(Paths.get("/path/to/config/dir"))
    .withTokenizer("standard")
    .addTokenFilter("lowercase")
    .addTokenFilter("stop", "ignoreCase", "false", "words", "stopwords.txt", "format", "wordset")
    .build();
The list of names to be used for components can be looked up through: TokenizerFactory.availableTokenizers(), TokenFilterFactory.availableTokenFilters(), and
CharFilterFactory.availableCharFilters().
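To see which names are valid on a given classpath, the lookup methods above can be printed directly. A minimal sketch; the factory package locations follow recent Lucene releases (in 8.x and earlier these classes lived in org.apache.lucene.analysis.util):

```java
import org.apache.lucene.analysis.CharFilterFactory;
import org.apache.lucene.analysis.TokenFilterFactory;
import org.apache.lucene.analysis.TokenizerFactory;

public class ListComponents {
  public static void main(String[] args) {
    // Each call returns the set of SPI names registered via ServiceLoader;
    // the contents depend on which analysis modules are on the classpath.
    System.out.println("tokenizers:    " + TokenizerFactory.availableTokenizers());
    System.out.println("token filters: " + TokenFilterFactory.availableTokenFilters());
    System.out.println("char filters:  " + CharFilterFactory.availableCharFilters());
  }
}
```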
You can create conditional branches in the analyzer by using CustomAnalyzer.Builder.when(String, String...) and CustomAnalyzer.Builder.whenTerm(Predicate):
Analyzer ana = CustomAnalyzer.builder()
    .withTokenizer("standard")
    .addTokenFilter("lowercase")
    .whenTerm(t -> t.length() > 10)
      .addTokenFilter("reversestring")
    .endwhen()
    .build();
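A built analyzer is consumed like any other Analyzer. A minimal sketch of draining its token stream (the field name "f" and the sample text are arbitrary; Lucene must be on the classpath):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class CustomAnalyzerDemo {
  // Collect the terms an analyzer produces for the given text.
  static List<String> analyze(Analyzer a, String text) throws IOException {
    List<String> terms = new ArrayList<>();
    // reset/incrementToken/end/close is the mandatory TokenStream workflow.
    try (TokenStream ts = a.tokenStream("f", text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        terms.add(term.toString());
      }
      ts.end();
    }
    return terms;
  }

  public static void main(String[] args) throws IOException {
    // builder() with no arguments resolves resources via Lucene's classloader.
    Analyzer ana = CustomAnalyzer.builder()
        .withTokenizer("standard")
        .addTokenFilter("lowercase")
        .build();
    System.out.println(analyze(ana, "The QUICK Brown Foxes")); // prints [the, quick, brown, foxes]
    ana.close();
  }
}
```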
- Since:
- 5.0.0
-
Nested Class Summary
Nested Classes
static final class: Builder for CustomAnalyzer.
static class: Factory class for a ConditionalTokenFilter.
Nested classes/interfaces inherited from class org.apache.lucene.analysis.Analyzer:
Analyzer.ReuseStrategy, Analyzer.TokenStreamComponents -
Field Summary
Fields
private final CharFilterFactory[] charFilters
private final Integer offsetGap
private final Integer posIncGap
private final TokenFilterFactory[] tokenFilters
private final TokenizerFactory tokenizer
Fields inherited from class org.apache.lucene.analysis.Analyzer:
GLOBAL_REUSE_STRATEGY, PER_FIELD_REUSE_STRATEGY -
Constructor Summary
Constructors
CustomAnalyzer(CharFilterFactory[] charFilters, TokenizerFactory tokenizer, TokenFilterFactory[] tokenFilters, Integer posIncGap, Integer offsetGap) -
Method Summary
static CustomAnalyzer.Builder builder()
    Returns a builder for custom analyzers that loads all resources from Lucene's classloader.
static CustomAnalyzer.Builder builder(Path configDir)
    Returns a builder for custom analyzers that loads all resources from the given file system base directory.
static CustomAnalyzer.Builder builder(ResourceLoader loader)
    Returns a builder for custom analyzers that loads all resources using the given ResourceLoader.
protected Analyzer.TokenStreamComponents createComponents(String fieldName)
    Creates a new Analyzer.TokenStreamComponents instance for this analyzer.
List<CharFilterFactory> getCharFilterFactories()
    Returns the list of char filters that are used in this analyzer.
int getOffsetGap(String fieldName)
    Just like Analyzer.getPositionIncrementGap(java.lang.String), except for Token offsets instead.
int getPositionIncrementGap(String fieldName)
    Invoked before indexing an IndexableField instance if terms have already been added to that field.
List<TokenFilterFactory> getTokenFilterFactories()
    Returns the list of token filters that are used in this analyzer.
TokenizerFactory getTokenizerFactory()
    Returns the tokenizer that is used in this analyzer.
protected Reader initReader(String fieldName, Reader reader)
    Override this if you want to add a CharFilter chain.
protected Reader initReaderForNormalization(String fieldName, Reader reader)
    Wrap the given Reader with CharFilters that make sense for normalization.
protected TokenStream normalize(String fieldName, TokenStream in)
    Wrap the given TokenStream in order to apply normalization filters.
String toString()
Methods inherited from class org.apache.lucene.analysis.Analyzer:
attributeFactory, close, getReuseStrategy, normalize, tokenStream, tokenStream
-
Field Details
-
charFilters
private final CharFilterFactory[] charFilters
-
tokenizer
private final TokenizerFactory tokenizer
-
tokenFilters
private final TokenFilterFactory[] tokenFilters
-
posIncGap
private final Integer posIncGap
-
offsetGap
private final Integer offsetGap
-
-
Constructor Details
-
CustomAnalyzer
CustomAnalyzer(CharFilterFactory[] charFilters, TokenizerFactory tokenizer, TokenFilterFactory[] tokenFilters, Integer posIncGap, Integer offsetGap)
-
-
Method Details
-
builder
Returns a builder for custom analyzers that loads all resources from Lucene's classloader. All path names given must be absolute with package prefixes. -
builder
Returns a builder for custom analyzers that loads all resources from the given file system base directory. Place, e.g., stop word files there. Files that are not in the given directory are loaded from Lucene's classloader. -
builder
Returns a builder for custom analyzers that loads all resources using the given ResourceLoader. -
initReader
Description copied from class: Analyzer
Override this if you want to add a CharFilter chain. The default implementation returns reader unchanged.
- Overrides:
  initReader in class Analyzer
- Parameters:
  fieldName - IndexableField name being indexed
  reader - original Reader
- Returns:
  reader, optionally decorated with CharFilter(s)
-
initReaderForNormalization
Description copied from class: Analyzer
Wrap the given Reader with CharFilters that make sense for normalization. This is typically a subset of the CharFilters that are applied in Analyzer.initReader(String, Reader). This is used by Analyzer.normalize(String, String).
- Overrides:
  initReaderForNormalization in class Analyzer
-
createComponents
Description copied from class: Analyzer
Creates a new Analyzer.TokenStreamComponents instance for this analyzer.
- Specified by:
  createComponents in class Analyzer
- Parameters:
  fieldName - the name of the fields content passed to the Analyzer.TokenStreamComponents sink as a reader
- Returns:
  the Analyzer.TokenStreamComponents for this analyzer
-
normalize
Description copied from class: Analyzer
Wrap the given TokenStream in order to apply normalization filters. The default implementation returns the TokenStream as-is. This is used by Analyzer.normalize(String, String). -
getPositionIncrementGap
Description copied from class: Analyzer
Invoked before indexing an IndexableField instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between IndexableField instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including across IndexableField instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across IndexableField instance boundaries.
- Overrides:
  getPositionIncrementGap in class Analyzer
- Parameters:
  fieldName - IndexableField name being indexed
- Returns:
  position increment gap, added to the next token emitted from Analyzer.tokenStream(String, Reader). This value must be >= 0.
-
getOffsetGap
Description copied from class: Analyzer
Just like Analyzer.getPositionIncrementGap(java.lang.String), except for Token offsets instead. By default this returns 1. This method is only called if the field produced at least one token for indexing.
- Overrides:
  getOffsetGap in class Analyzer
- Parameters:
  fieldName - the field just indexed
- Returns:
  offset gap, added to the next token emitted from Analyzer.tokenStream(String, Reader). This value must be >= 0.
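Both gaps can be fixed at construction time: CustomAnalyzer.Builder exposes withPositionIncrementGap and withOffsetGap setters (available in recent Lucene versions) that populate the posIncGap and offsetGap fields, overriding Analyzer's defaults of 0 and 1. A sketch under that assumption:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class GapConfigDemo {
  public static void main(String[] args) throws Exception {
    // The gaps configured here are returned by getPositionIncrementGap
    // and getOffsetGap for every field of this analyzer.
    Analyzer ana = CustomAnalyzer.builder()
        .withTokenizer("whitespace")
        .withPositionIncrementGap(100) // keeps PhraseQuery from matching across field instances
        .withOffsetGap(1)
        .build();
    System.out.println(ana.getPositionIncrementGap("body")); // prints 100
    ana.close();
  }
}
```

A large position increment gap is the usual way to stop phrase queries from matching across the boundary between two IndexableField instances with the same name.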
-
getCharFilterFactories
Returns the list of char filters that are used in this analyzer. -
getTokenizerFactory
Returns the tokenizer that is used in this analyzer. -
getTokenFilterFactories
Returns the list of token filters that are used in this analyzer. -
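The three accessors above make the configured chain inspectable after construction, which is handy in tests or debugging. A short sketch (the TokenFilterFactory import path follows Lucene 9.x; 8.x used org.apache.lucene.analysis.util):

```java
import org.apache.lucene.analysis.TokenFilterFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class InspectChainDemo {
  public static void main(String[] args) throws Exception {
    CustomAnalyzer ana = CustomAnalyzer.builder()
        .withTokenizer("standard")
        .addTokenFilter("lowercase")
        .build();
    // The SPI name "standard" resolves to StandardTokenizerFactory.
    System.out.println(ana.getTokenizerFactory().getClass().getSimpleName());
    for (TokenFilterFactory f : ana.getTokenFilterFactories()) {
      System.out.println(f.getClass().getSimpleName()); // LowerCaseFilterFactory
    }
    ana.close();
  }
}
```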
toString
-