feat: Add GoogleAITextEmbedder and GoogleAIDocumentEmbedder components #1783
Merged
Changes from all commits (21 commits)
85a7ef2 feat: Add GoogleAITextEmbedder and GoogleAIDocumentEmbedder components (garybadwal)
b9f94c7 fix: Improve error messages for input type validation in GoogleAIText… (garybadwal)
15fd085 Merge branch 'deepset-ai:main' into main (garybadwal)
682a4e2 feat: add Google GenAI embedder components for document and text embe… (garybadwal)
778f702 feat: add unit tests for GoogleAIDocumentEmbedder and GoogleAITextEmb… (garybadwal)
3de6d1e refactor: clean up imports and improve list handling in GoogleAIDocum… (garybadwal)
7fcfb14 Merge branch 'main' into main (garybadwal)
c85dadb Merge branch 'main' into main (garybadwal)
9c6cb1a refactor: Rename classes and update imports for Google GenAI components (garybadwal)
89bb3be feat: Add additional modules for Google GenAI embedders in config (garybadwal)
ea0c0c9 Merge branch 'main' into main (garybadwal)
814f53c Merge branch 'main' into main (garybadwal)
f2d2a0c chore: add 'more-itertools' to lint environment dependencies (garybadwal)
171ab37 refactor: update GoogleGenAIDocumentEmbedder and GoogleGenAITextEmbed… (garybadwal)
f20bdff refactor: update _prepare_texts_to_embed to return a list instead of … (garybadwal)
38525c0 refactor: format code for better readability and consistency in docum… (garybadwal)
f8e5f8a refactor: improve code formatting for consistency and readability in … (garybadwal)
666d0d5 refactor: update _prepare_texts_to_embed to return a list instead of … (garybadwal)
0b8d687 feat: add new author to project metadata in pyproject.toml (garybadwal)
c323a34 Merge branch 'main' into main (garybadwal)
94b31ae Merge branch 'main' into main (garybadwal)
7 changes: 7 additions & 0 deletions
...ions/google_genai/src/haystack_integrations/components/embedders/google_genai/__init__.py
@@ -0,0 +1,7 @@
# SPDX-FileCopyrightText: 2023-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0
from .document_embedder import GoogleGenAIDocumentEmbedder
from .text_embedder import GoogleGenAITextEmbedder

__all__ = ["GoogleGenAIDocumentEmbedder", "GoogleGenAITextEmbedder"]
193 changes: 193 additions & 0 deletions
...le_genai/src/haystack_integrations/components/embedders/google_genai/document_embedder.py
@@ -0,0 +1,193 @@
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, List, Optional, Tuple, Union

from google import genai
from google.genai import types
from haystack import Document, component, default_from_dict, default_to_dict, logging
from haystack.utils import Secret, deserialize_secrets_inplace
from more_itertools import batched
from tqdm import tqdm

logger = logging.getLogger(__name__)


@component
class GoogleGenAIDocumentEmbedder:
    """
    Computes document embeddings using Google AI models.

    ### Usage example

    ```python
    from haystack import Document
    from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

    doc = Document(content="I love pizza!")

    document_embedder = GoogleGenAIDocumentEmbedder()

    result = document_embedder.run([doc])
    print(result['documents'][0].embedding)

    # [0.017020374536514282, -0.023255806416273117, ...]
    ```
    """

    def __init__(
        self,
        *,
        api_key: Secret = Secret.from_env_var("GOOGLE_API_KEY"),
        model: str = "text-embedding-004",
        prefix: str = "",
        suffix: str = "",
        batch_size: int = 32,
        progress_bar: bool = True,
        meta_fields_to_embed: Optional[List[str]] = None,
        embedding_separator: str = "\n",
        config: Optional[Dict[str, Any]] = None,
    ):
        """
        Creates a GoogleGenAIDocumentEmbedder component.

        :param api_key:
            The Google API key.
            You can set it with the environment variable `GOOGLE_API_KEY`, or pass it via this parameter
            during initialization.
        :param model:
            The name of the model to use for calculating embeddings.
            The default model is `text-embedding-004`.
        :param prefix:
            A string to add at the beginning of each text.
        :param suffix:
            A string to add at the end of each text.
        :param batch_size:
            Number of documents to embed at once.
        :param progress_bar:
            If `True`, shows a progress bar when running.
        :param meta_fields_to_embed:
            List of metadata fields to embed along with the document text.
        :param embedding_separator:
            Separator used to concatenate the metadata fields to the document text.
        :param config:
            A dictionary of keyword arguments to configure the embedding content configuration `types.EmbedContentConfig`.
            If not specified, it defaults to {"task_type": "SEMANTIC_SIMILARITY"}.
            For more information, see the [Google AI Task types](https://ai.google.dev/gemini-api/docs/embeddings#task-types).
        """
        self._api_key = api_key
        self._model = model
        self._prefix = prefix
        self._suffix = suffix
        self._batch_size = batch_size
        self._progress_bar = progress_bar
        self._meta_fields_to_embed = meta_fields_to_embed or []
        self._embedding_separator = embedding_separator
        self._client = genai.Client(api_key=api_key.resolve_value())
        self._config = config if config is not None else {"task_type": "SEMANTIC_SIMILARITY"}

    def to_dict(self) -> Dict[str, Any]:
        """
        Serializes the component to a dictionary.

        :returns:
            Dictionary with serialized data.
        """
        return default_to_dict(
            self,
            model=self._model,
            prefix=self._prefix,
            suffix=self._suffix,
            batch_size=self._batch_size,
            progress_bar=self._progress_bar,
            meta_fields_to_embed=self._meta_fields_to_embed,
            embedding_separator=self._embedding_separator,
            api_key=self._api_key.to_dict(),
            config=self._config,
        )

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "GoogleGenAIDocumentEmbedder":
        """
        Deserializes the component from a dictionary.

        :param data:
            Dictionary to deserialize from.
        :returns:
            Deserialized component.
        """
        deserialize_secrets_inplace(data["init_parameters"], keys=["api_key"])
        return default_from_dict(cls, data)

    def _prepare_texts_to_embed(self, documents: List[Document]) -> List[str]:
        """
        Prepare the texts to embed by concatenating the Document text with the metadata fields to embed.
        """
        texts_to_embed: List[str] = []
        for doc in documents:
            meta_values_to_embed = [
                str(doc.meta[key])
                for key in self._meta_fields_to_embed
                if key in doc.meta and doc.meta[key] is not None
            ]

            text_to_embed = (
                self._prefix
                + self._embedding_separator.join([*meta_values_to_embed, doc.content or ""])
                + self._suffix
            )
            texts_to_embed.append(text_to_embed)

        return texts_to_embed

    def _embed_batch(self, texts_to_embed: List[str], batch_size: int) -> Tuple[List[List[float]], Dict[str, Any]]:
        """
        Embed a list of texts in batches.
        """
        all_embeddings = []
        meta: Dict[str, Any] = {}
        for batch in tqdm(
            batched(texts_to_embed, batch_size), disable=not self._progress_bar, desc="Calculating embeddings"
        ):
            args: Dict[str, Any] = {"model": self._model, "contents": list(batch)}
            if self._config:
                args["config"] = types.EmbedContentConfig(**self._config)

            response = self._client.models.embed_content(**args)

            embeddings = [el.values for el in response.embeddings]
            all_embeddings.extend(embeddings)

            if "model" not in meta:
                meta["model"] = self._model

        return all_embeddings, meta

    @component.output_types(documents=List[Document], meta=Dict[str, Any])
    def run(self, documents: List[Document]) -> Dict[str, Union[List[Document], Dict[str, Any]]]:
        """
        Embeds a list of documents.

        :param documents:
            A list of documents to embed.

        :returns:
            A dictionary with the following keys:
            - `documents`: A list of documents with embeddings.
            - `meta`: Information about the usage of the model.
        """
        if not isinstance(documents, list) or (documents and not isinstance(documents[0], Document)):
            error_message_documents = (
                "GoogleGenAIDocumentEmbedder expects a list of Documents as input. "
                "In case you want to embed a string, please use the GoogleGenAITextEmbedder."
            )
            raise TypeError(error_message_documents)

        texts_to_embed = self._prepare_texts_to_embed(documents=documents)

        embeddings, meta = self._embed_batch(texts_to_embed=texts_to_embed, batch_size=self._batch_size)

        for doc, emb in zip(documents, embeddings):
            doc.embedding = emb

        return {"documents": documents, "meta": meta}
139 changes: 139 additions & 0 deletions
...google_genai/src/haystack_integrations/components/embedders/google_genai/text_embedder.py
@@ -0,0 +1,139 @@
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, List, Optional, Union

from google import genai
from google.genai import types
from haystack import component, default_from_dict, default_to_dict, logging
from haystack.utils import Secret, deserialize_secrets_inplace

logger = logging.getLogger(__name__)


@component
class GoogleGenAITextEmbedder:
    """
    Embeds strings using Google AI models.

    You can use it to embed a user query and send it to an embedding retriever.

    ### Usage example

    ```python
    from haystack_integrations.components.embedders.google_genai import GoogleGenAITextEmbedder

    text_to_embed = "I love pizza!"

    text_embedder = GoogleGenAITextEmbedder()

    print(text_embedder.run(text_to_embed))

    # {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
    # 'meta': {'model': 'text-embedding-004'}}
    ```
    """

    def __init__(
        self,
        *,
        api_key: Secret = Secret.from_env_var("GOOGLE_API_KEY"),
        model: str = "text-embedding-004",
        prefix: str = "",
        suffix: str = "",
        config: Optional[Dict[str, Any]] = None,
    ):
        """
        Creates a GoogleGenAITextEmbedder component.

        :param api_key:
            The Google API key.
            You can set it with the environment variable `GOOGLE_API_KEY`, or pass it via this parameter
            during initialization.
        :param model:
            The name of the model to use for calculating embeddings.
            The default model is `text-embedding-004`.
        :param prefix:
            A string to add at the beginning of each text to embed.
        :param suffix:
            A string to add at the end of each text to embed.
        :param config:
            A dictionary of keyword arguments to configure the embedding content configuration `types.EmbedContentConfig`.
            If not specified, it defaults to {"task_type": "SEMANTIC_SIMILARITY"}.
            For more information, see the [Google AI Task types](https://ai.google.dev/gemini-api/docs/embeddings#task-types).
        """
        self._api_key = api_key
        self._model_name = model
        self._prefix = prefix
        self._suffix = suffix
        self._config = config if config is not None else {"task_type": "SEMANTIC_SIMILARITY"}
        self._client = genai.Client(api_key=api_key.resolve_value())

    def to_dict(self) -> Dict[str, Any]:
        """
        Serializes the component to a dictionary.

        :returns:
            Dictionary with serialized data.
        """
        return default_to_dict(
            self,
            model=self._model_name,
            api_key=self._api_key.to_dict(),
            prefix=self._prefix,
            suffix=self._suffix,
            config=self._config,
        )

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "GoogleGenAITextEmbedder":
        """
        Deserializes the component from a dictionary.

        :param data:
            Dictionary to deserialize from.
        :returns:
            Deserialized component.
        """
        deserialize_secrets_inplace(data["init_parameters"], keys=["api_key"])
        return default_from_dict(cls, data)

    def _prepare_input(self, text: str) -> Dict[str, Any]:
        if not isinstance(text, str):
            error_message_text = (
                "GoogleGenAITextEmbedder expects a string as an input. "
                "In case you want to embed a list of Documents, please use the GoogleGenAIDocumentEmbedder."
            )

            raise TypeError(error_message_text)

        text_to_embed = self._prefix + text + self._suffix

        kwargs: Dict[str, Any] = {"model": self._model_name, "contents": text_to_embed}
        if self._config:
            kwargs["config"] = types.EmbedContentConfig(**self._config)

        return kwargs

    def _prepare_output(self, result: types.EmbedContentResponse) -> Dict[str, Any]:
        return {"embedding": result.embeddings[0].values, "meta": {"model": self._model_name}}

    @component.output_types(embedding=List[float], meta=Dict[str, Any])
    def run(self, text: str) -> Union[Dict[str, List[float]], Dict[str, Any]]:
        """
        Embeds a single string.

        :param text:
            Text to embed.

        :returns:
            A dictionary with the following keys:
            - `embedding`: The embedding of the input text.
            - `meta`: Information about the usage of the model.
        """
        create_kwargs = self._prepare_input(text=text)
        response = self._client.models.embed_content(**create_kwargs)
        return self._prepare_output(result=response)
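On the query side, the docstring notes this embedder is meant to feed an embedding retriever. A minimal query-pipeline sketch, assuming Haystack's `InMemoryEmbeddingRetriever` and an `InMemoryDocumentStore` already populated by an indexing pipeline like the one sketched above (both assumptions, not part of this diff); `RETRIEVAL_QUERY` is used as an illustrative task type:

```python
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

from haystack_integrations.components.embedders.google_genai import GoogleGenAITextEmbedder

# Assumes this store already holds documents embedded with GoogleGenAIDocumentEmbedder.
document_store = InMemoryDocumentStore()

query_pipeline = Pipeline()
query_pipeline.add_component(
    "text_embedder", GoogleGenAITextEmbedder(config={"task_type": "RETRIEVAL_QUERY"})
)
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
# The text embedder's "embedding" output feeds the retriever's "query_embedding" input.
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "Where does sushi come from?"}})
print(result["retriever"]["documents"])
```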