[FEATURE] Support for multiple field mappings in a single text-image embedding processor #476
Labels: enhancement, Features, neural-search
Is your feature request related to a problem?
Currently the neural-search `text_image_embedding` processor allows only a single document field to be defined for each of the text and image mappings, and only a single field can be defined to store the resulting embedding in OpenSearch. An example of the current processor definition is shown below.
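A minimal sketch following the documented `text_image_embedding` processor syntax; the field names and `model_id` value are placeholders:

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "Text/image embedding pipeline",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "<model_id>",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```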
What solution would you like?
It should be possible to define multiple field pairs for image, text, or image+text, and to define, for each pair, the OpenSearch field that stores the embedding produced by the model. The request may look something like the sketch below.
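Purely illustrative; the `field_maps` list and the per-entry `embedding` key are hypothetical and not part of the current API, and all field names are placeholders:

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "Text/image embedding pipeline with multiple mappings",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "<model_id>",
        "field_maps": [
          {
            "text": "product_description",
            "image": "product_image_binary",
            "embedding": "product_embedding"
          },
          {
            "text": "review_text",
            "embedding": "review_embedding"
          }
        ]
      }
    }
  ]
}
```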
What alternatives have you considered?
Today it's possible to define multiple embedding processors as part of a single pipeline, and each processor may have its own definition of the field mapping and embedding field, as sketched below.
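A sketch of this workaround, assuming two separate `text_image_embedding` processors in one pipeline, each with its own mapping and embedding field (field names and `model_id` values are placeholders):

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "Pipeline with one embedding processor per field pair",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "<model_id>",
        "embedding": "product_embedding",
        "field_map": {
          "text": "product_description",
          "image": "product_image_binary"
        }
      }
    },
    {
      "text_image_embedding": {
        "model_id": "<model_id>",
        "embedding": "review_embedding",
        "field_map": {
          "text": "review_text"
        }
      }
    }
  ]
}
```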
Do you have any additional context?