text_2_sql/README.md (+3 −3)
@@ -162,7 +162,7 @@ A full data dictionary must be built for all the views / tables you wish to exp

This method is called by the Semantic Kernel framework automatically, when instructed to do so by the LLM, to run a SQL query against the given database. It returns a JSON string containing a row-wise dump of the results returned. These results are then interpreted to answer the question.

-## Prompt Based SQL Plugin
+## Prompt Based SQL Plugin (Iteration 2)

This approach works well for a small number of entities (tested on up to 20 entities with hundreds of columns). It performed well in testing; with correct metadata, we achieved 100% accuracy on the test set.
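For illustration, a query-execution plugin method like the one described above might look roughly like the following sketch, assuming the Python Semantic Kernel SDK and `pyodbc`. The class name, connection handling, and driver choice are assumptions for the example, not the repository's actual implementation:

```python
import json

import pyodbc  # assumed driver; the repository may use a different client
from semantic_kernel.functions import kernel_function


class SQLPlugin:
    """Hypothetical sketch of a plugin exposing SQL execution to the LLM."""

    def __init__(self, connection_string: str):
        self.connection_string = connection_string

    @kernel_function(
        name="run_sql_query",
        description="Runs a SQL query against the database and returns the results.",
    )
    def run_sql_query(self, query: str) -> str:
        """Execute the query and return a row-wise JSON dump of the results."""
        with pyodbc.connect(self.connection_string) as connection:
            cursor = connection.cursor()
            cursor.execute(query)
            columns = [column[0] for column in cursor.description]
            rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
        # The LLM interprets this JSON string to answer the original question.
        return json.dumps(rows, default=str)
```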
@@ -184,7 +184,7 @@ The **target_engine** is passed to the prompt, along with **engine_specific_rule

This method is called by the Semantic Kernel framework automatically, when instructed to do so by the LLM, to fetch the full schema definitions for a given entity. This returns a JSON string of the chosen entity, which allows the LLM to understand the column definitions and their associated metadata. This can be called in parallel for multiple entities.

-## Vector Based SQL Plugin
+## Vector Based SQL Plugin (Iterations 3 & 4)

This approach allows the system to scale without significantly increasing the number of tokens used within the system prompt. However, indexing and running an AI Search instance incurs additional cost compared to the prompt based approach.
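A rough sketch of the schema-fetch method described in this hunk, assuming an in-memory data dictionary keyed by entity name (the class and field names here are hypothetical):

```python
import json

from semantic_kernel.functions import kernel_function


class SchemaPlugin:
    """Hypothetical sketch of a plugin serving schema definitions to the LLM."""

    def __init__(self, data_dictionary: dict[str, dict]):
        # Maps entity name -> schema definition (columns, types, metadata).
        self.data_dictionary = data_dictionary

    @kernel_function(
        name="get_entity_schema",
        description="Fetches the full schema definition for a given entity.",
    )
    def get_entity_schema(self, entity: str) -> str:
        """Return the chosen entity's column definitions as a JSON string."""
        schema = self.data_dictionary.get(entity)
        if schema is None:
            return json.dumps({"error": f"Unknown entity: {entity}"})
        return json.dumps({"entity": entity, "schema": schema})
```

Because each call covers a single entity, the LLM can issue several tool calls in parallel to retrieve multiple schemas at once.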
@@ -212,7 +212,7 @@ The search text passed is vectorised against the entity level **Description** co

#### run_ai_search_query()

-The vector based with query cache notebook uses the `run_ai_search_query()` method to fetch the most relevant previous query and injects it into the prompt. The use of Auto-Function Calling here is avoided to reduce the response time as the cache index will always be used first.
+The vector based with query cache notebook uses the `run_ai_search_query()` method to fetch the most relevant previous query and injects it into the prompt before the initial LLM call. The use of Auto-Function Calling here is avoided to reduce the response time as the cache index will always be used first.
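A minimal sketch of what a cache lookup like `run_ai_search_query()` could look like with the `azure-search-documents` SDK; the index name, vector field, and `sql_query` result field are assumptions for the example:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery


def run_ai_search_query(
    question_embedding: list[float],
    endpoint: str,
    api_key: str,
    index_name: str = "query-cache",  # hypothetical index name
) -> str | None:
    """Fetch the most relevant previously generated SQL query from the cache."""
    client = SearchClient(endpoint, index_name, AzureKeyCredential(api_key))
    results = client.search(
        search_text=None,
        vector_queries=[
            VectorizedQuery(
                vector=question_embedding,
                k_nearest_neighbors=1,
                fields="question_vector",  # hypothetical vector field
            )
        ],
        top=1,
    )
    for result in results:
        # Injected into the prompt before the initial LLM call.
        return result["sql_query"]  # hypothetical field holding the cached SQL
    return None
```

Because the cache lookup always runs first, calling it directly rather than exposing it for Auto-Function Calling avoids an extra LLM round trip.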