
Make tokenizer on input fields when creating ES indices from s3 buckets #215


Description

@doogyb

In order for a DQ aggregation (entitiesCountOverTokenCountByConfidence) to work correctly, we need to create the Elasticsearch index with a mapping/processor that adds a tokenizer to the input field.
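For reference, a minimal sketch of what that index creation could look like, assuming the Python elasticsearch client and hypothetical index/field names (dq-documents, input, confidence); the token_count sub-field is one way to make per-document token counts available to the aggregation:

```python
from elasticsearch import Elasticsearch

# Hypothetical connection and names; the real index/field names come from the
# S3 ingest job that creates these indices.
es = Elasticsearch("http://localhost:9200")

settings = {
    "analysis": {
        "analyzer": {
            # Custom analyzer for the input field; the tokenizer choice
            # (standard, whitespace, ...) determines how tokens are counted.
            "input_analyzer": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase"],
            }
        }
    }
}

mappings = {
    "properties": {
        "input": {
            "type": "text",
            "analyzer": "input_analyzer",
            "fields": {
                # token_count sub-field stores the number of tokens produced
                # by the analyzer, so aggregations can bucket on it directly.
                "token_count": {
                    "type": "token_count",
                    "analyzer": "input_analyzer",
                }
            },
        },
        # Assumed confidence field used by the by-confidence bucketing.
        "confidence": {"type": "float"},
    }
}

es.indices.create(index="dq-documents", settings=settings, mappings=mappings)
```

With a mapping like this, the aggregation could group on input.token_count (and confidence) without re-tokenizing at query time.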
