feat: search quality eval #4720
Conversation
PR Summary
This PR introduces a comprehensive search quality evaluation framework, with scripts and configuration files for assessing and tuning search parameters by comparing search results against reranked results.
- Added run_search_eval.py, implementing metrics such as Jaccard similarity and rank changes to evaluate search quality against the reranker's output
- Added generate_search_queries.py to ensure consistent query modification across test runs, using the LLM and search tool interface
- Implemented score-adjusted evaluation metrics in run_search_eval.py to handle varying relevance thresholds
- Added a detailed configuration template (search_eval_config.yaml.template) for customizing search and evaluation parameters
- Enhanced error handling and logging for stop word removal in backend/onyx/context/search/utils.py
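For context, agreement metrics like these reduce to simple set and rank comparisons between the two orderings. A minimal sketch, assuming the metrics operate on ranked lists of document IDs; the function names and the top-k cutoff are illustrative, not the PR's actual implementation:

```python
# Hypothetical sketch of search-vs-reranker agreement metrics.
# Both inputs are ranked lists of document IDs, best first.

def jaccard_similarity(
    search_ids: list[str], reranked_ids: list[str], top_k: int = 10
) -> float:
    """Overlap between the top-k search results and top-k reranked results."""
    a, b = set(search_ids[:top_k]), set(reranked_ids[:top_k])
    return len(a & b) / len(a | b) if a | b else 1.0

def average_rank_change(search_ids: list[str], reranked_ids: list[str]) -> float:
    """Mean absolute shift in position for documents present in both lists."""
    rerank_pos = {doc_id: i for i, doc_id in enumerate(reranked_ids)}
    shifts = [
        abs(i - rerank_pos[doc_id])
        for i, doc_id in enumerate(search_ids)
        if doc_id in rerank_pos
    ]
    return sum(shifts) / len(shifts) if shifts else 0.0
```

A perfect match of the top-k sets gives a Jaccard score of 1.0, and a search ordering that already agrees with the reranker gives an average rank change near 0.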
A few nits, LGTM!
* fix: import order
* test examples
* fix: import
* wip: reranker based eval
* fix: import order
* feat: adjuted score
* fix: mypy
* fix: suggestions
* sorry cvs, you must go
* fix: mypy
* fix: suggestions

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Description
What does it do?
run_search_eval.py runs a set of queries locally and compares the results from the search pipeline against the reranker's output.
It evaluates search quality by how closely the search ordering aligns with the reranker's ordering (assuming the reranker works well).
It is mostly a tool for quickly testing and tuning search parameters such as hybrid alpha, decay, etc. It can also be used to test other factors that affect search, such as the prompt, embedding model, or quantization (see the sketch below).
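For illustration, a tuning run over hybrid alpha could be driven like this. `search_fn` and `rerank_fn` are placeholders for whatever the eval script actually calls, and `jaccard_similarity` refers to the sketch above; none of these names are the script's real API:

```python
from statistics import mean
from typing import Callable

def sweep_hybrid_alpha(
    queries: list[str],
    alphas: list[float],
    search_fn: Callable[[str, float], list[str]],      # (query, alpha) -> ranked doc IDs
    rerank_fn: Callable[[str, list[str]], list[str]],  # (query, doc IDs) -> reranked doc IDs
) -> dict[float, float]:
    """Average search-vs-reranker agreement for each candidate alpha."""
    results: dict[float, float] = {}
    for alpha in alphas:
        scores = []
        for query in queries:
            search_ids = search_fn(query, alpha)
            reranked_ids = rerank_fn(query, search_ids)
            scores.append(jaccard_similarity(search_ids, reranked_ids))
        results[alpha] = mean(scores)
    return results
```

The alpha with the highest average agreement is the one whose keyword/vector blend best matches the reranker's judgment of relevance.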
Unlike answer_quality/run_qa, it doesn't need ground truth labels, which enables quick and easy testing, including of queries without a clear "ground truth" ordering.
It also makes sure the query doesn't change between runs (normally, the query is modified before entering the search pipeline), so that comparisons stay fair.
generate_search_queries.py is a helper tool that converts the queries and saves them, so the evaluation script can reuse the same modified queries.
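A minimal sketch of that pre-generation flow, assuming a hypothetical `rewrite_query` LLM call and an illustrative JSON layout (neither is the PR's actual format):

```python
import json
from pathlib import Path
from typing import Callable

# Rewrite each raw query once, persist the result, and let the eval
# script reuse the saved rewrite verbatim on every run.

def generate_and_save_queries(
    raw_queries: list[str],
    rewrite_query: Callable[[str], str],  # e.g. an LLM-backed query modification
    out_path: Path = Path("search_queries.json"),
) -> None:
    pairs = [{"raw": q, "modified": rewrite_query(q)} for q in raw_queries]
    out_path.write_text(json.dumps(pairs, indent=2))

def load_modified_queries(path: Path = Path("search_queries.json")) -> list[str]:
    """Eval runs read the saved rewrites, so every run searches identical queries."""
    return [pair["modified"] for pair in json.loads(path.read_text())]
```

Fixing the rewrites this way removes the LLM's nondeterminism as a variable, so any difference between eval runs reflects the search parameters being tested.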
How Has This Been Tested?
Backporting (check the box to trigger the backport action)
Note: You must verify that the action passes; otherwise, resolve the conflicts manually and tag the patches.