Replies: 2 comments 6 replies
-
🤖 Hello @burberg92! Good to see you again. I hope your work with LlamaIndex is going well. Let's dive into your new issue. To achieve your goal of retrieving nodes based on a query and replacing them with the window, without having the query engine generate a response, you can follow these steps:
1. Build a `SentenceWindowNodeParser` and parse your documents into nodes. Each node stores its surrounding sentences in metadata:

```python
from llama_index.node_parser import SentenceWindowNodeParser

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)
nodes = node_parser.get_nodes_from_documents(documents)
```

2. Build a `VectorStoreIndex` over those nodes (`ctx` here is your `ServiceContext`, defined elsewhere):

```python
from llama_index import VectorStoreIndex

sentence_index = VectorStoreIndex(nodes, service_context=ctx)
```

3. Create a query engine with a `MetadataReplacementPostProcessor`, which swaps each retrieved sentence for the window stored in its metadata, and run your query:

```python
from llama_index.postprocessor import MetadataReplacementPostProcessor

query_engine = sentence_index.as_query_engine(
    similarity_top_k=2,
    node_postprocessors=[
        MetadataReplacementPostProcessor(target_metadata_key="window")
    ],
)
window_response = query_engine.query(
    "What are the concerns surrounding the AMOC?"
)

window = window_response.source_nodes[0].node.metadata["window"]
sentence = window_response.source_nodes[0].node.metadata["original_text"]

print(f"Window: {window}")
print("------------------")
print(f"Original Sentence: {sentence}")
```

This process does not generate a response from the LLM. It only retrieves the nodes based on the query and replaces them with the window. You can find more details in the MetadataReplacementDemo.ipynb notebook. I hope this helps! If you have any other questions, feel free to ask.
-
@burberg92 if you want to retrieve nodes and apply a node postprocessor yourself, you can do so (I think dosu has misled you a bit; `query_engine.query()` above still calls the LLM to synthesize an answer).
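A minimal sketch of that retrieve-then-postprocess flow, reusing the `sentence_index` built in the snippet above; no LLM call is involved:

```python
from llama_index.postprocessor import MetadataReplacementPostProcessor

# Retrieve raw nodes; no response synthesis happens here
retriever = sentence_index.as_retriever(similarity_top_k=2)
retrieved = retriever.retrieve("What are the concerns surrounding the AMOC?")

# Apply the same window replacement the query engine would have applied
postprocessor = MetadataReplacementPostProcessor(target_metadata_key="window")
windowed = postprocessor.postprocess_nodes(retrieved)

for n in windowed:
    print(n.node.get_content())  # node text is now the full sentence window
```

`postprocess_nodes` rewrites each node's text to the value stored under the `window` metadata key, so `get_content()` returns the full window rather than the single matched sentence.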
If you need to run many queries at the same time, I would use multiprocessing (sadly, pinecone does not have an async query method, otherwise I would suggest using async retrieval).
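A rough multiprocessing sketch of that idea. Note the caveat that vector store clients often don't pickle, so each worker rebuilds its own retriever; `build_sentence_index()` is a hypothetical factory standing in for your own index setup:

```python
from multiprocessing import Pool

from llama_index.postprocessor import MetadataReplacementPostProcessor


def retrieve_windows(query: str) -> list[str]:
    # Rebuild the index/retriever inside each worker; clients like
    # pinecone's generally can't be shared across processes.
    retriever = build_sentence_index().as_retriever(similarity_top_k=2)  # hypothetical factory
    postprocessor = MetadataReplacementPostProcessor(target_metadata_key="window")
    nodes = postprocessor.postprocess_nodes(retriever.retrieve(query))
    return [n.node.get_content() for n in nodes]


if __name__ == "__main__":
    queries = [
        "What are the concerns surrounding the AMOC?",
        "Is the AMOC slowing down?",
    ]
    with Pool(processes=4) as pool:
        results = pool.map(retrieve_windows, queries)
```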
-
My goal is to retrieve nodes based on a query and then replace them with the window, as in this guide: https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/MetadataReplacementDemo.html. However, I do not want the query engine to generate a response to the query. How can I achieve this?