+AIService__AzureSearchOptions__Key=<AI search key if using non-identity-based connection>
+AIService__AzureSearchOptions__Text2SqlSchemaStore__Index=<Schema store index name. Default is created as "text-2-sql-schema-store-index">
+AIService__AzureSearchOptions__Text2SqlSchemaStore__SemanticConfig=<Schema store semantic config. Default is created as "text-2-sql-schema-store-semantic-config">
+AIService__AzureSearchOptions__Text2SqlQueryCache__Index=<Query cache index name. Default is created as "text-2-sql-query-cache-index">
+AIService__AzureSearchOptions__Text2SqlQueryCache__SemanticConfig=<Query cache semantic config. Default is created as "text-2-sql-query-cache-semantic-config">
+AIService__AzureSearchOptions__Text2SqlColumnValueStore__Index=<Column value store index name. Default is created as "text-2-sql-column-value-store-index">
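For orientation, the double underscores in these names conventionally encode nesting (`AIService__AzureSearchOptions__Key` ≈ `AIService.AzureSearchOptions.Key`). Below is a minimal sketch of reading them under that assumption; the `load_nested_env` helper is hypothetical, not the repository's actual settings loader.

```python
import os

# Hypothetical helper, assuming the double-underscore convention maps
# each variable to nested settings levels. Not the repository's loader.
def load_nested_env(prefix: str = "AIService__") -> dict:
    """Collect AIService__* variables into a nested dict, splitting on
    the double underscores between settings levels."""
    config: dict = {}
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        node = config
        *parents, leaf = name[len(prefix):].split("__")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return config

# Example: look up the schema store index, falling back to the
# documented default name.
settings = load_nested_env()
index_name = (
    settings.get("AzureSearchOptions", {})
    .get("Text2SqlSchemaStore", {})
    .get("Index", "text-2-sql-schema-store-index")
)
print(index_name)
```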
`text_2_sql/GETTING_STARTED.md` (3 additions, 3 deletions)
@@ -5,7 +5,7 @@ To get started, perform the following steps:
 1. Setup Azure OpenAI in your subscription with **gpt-4o-mini** & an embedding model, alongside a SQL Server sample database, AI Search and a storage account.
 2. Clone this repository and deploy the AI Search text2sql indexes from `deploy_ai_search`.
 3. Run `uv sync` within the text_2_sql directory to install dependencies.
-4.Configure the .env file based on the provided sample
-5. Generate a data dictionary for your target server using the instructions in `data_dictionary`.
-6. Upload these data dictionaries to the relevant contains in your storage account. Wait for them to be automatically indexed.
+4. Create your `.env` file based on the provided sample `.env.example`. Place this file in the same location as the `.env.example`.
+5. Generate a data dictionary for your target server using the instructions in the **Running** section of `data_dictionary/README.md`.
+6. Upload these data dictionaries to the relevant containers in your storage account. Wait for them to be automatically indexed with the included skillsets.
 7. Navigate to `autogen` directory to view the AutoGen implementation. Follow the steps in `Iteration 5 - Agentic Vector Based Text2SQL.ipynb` to get started.
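As a quick sanity check for step 4, the snippet below loads the `.env` and verifies that the search settings from the sample above are present. It assumes `python-dotenv` is installed; the variable names are taken from the sample, but adjust them to your own `.env.example`.

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is available

# Read the .env created in step 4 from the current working directory.
load_dotenv()

# Variables we expect from the sample above. Note the Key is only
# needed for non-identity-based connections.
required = [
    "AIService__AzureSearchOptions__Key",
    "AIService__AzureSearchOptions__Text2SqlSchemaStore__Index",
    "AIService__AzureSearchOptions__Text2SqlQueryCache__Index",
]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing required settings: {', '.join(missing)}")
print("Environment looks good.")
```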
`text_2_sql/README.md` (17 additions, 17 deletions)
@@ -54,7 +54,20 @@ As the query cache is shared between users (no data is stored in the cache), a n
 
 
 
-#### Parallel execution
+## Agents
+
+This agentic system contains the following agents:
+
+- **Query Cache Agent:** Responsible for checking the cache for previously asked questions.
+- **Query Decomposition Agent:** Responsible for decomposing complex questions into sub-questions that can be answered with SQL.
+- **Schema Selection Agent:** Responsible for extracting key terms from the question and checking the index store for matching schemas.
+- **SQL Query Generation Agent:** Responsible for using the previously extracted schemas and generated SQL queries to answer the question. This agent can request more schemas if needed and will run the query.
+- **SQL Query Verification Agent:** Responsible for verifying that the SQL query and its results will answer the question.
+- **Answer Generation Agent:** Responsible for taking the database results and generating the final answer for the user.
+
+The combination of these agents allows the system to answer complex questions whilst staying under the token limits when including the database schemas. The query cache ensures that previously asked questions can be answered quickly, avoiding a degraded user experience.
+
+### Parallel execution
 
 After the first agent has rewritten and decomposed the user input, we execute each of the individual questions in parallel for the quickest time to generate an answer.
 
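The parallel fan-out this new section describes amounts to decomposing the input and then awaiting all sub-questions concurrently. A minimal sketch of that pattern follows; `answer_sub_question` and the inline decomposition are hypothetical placeholders, not the repository's actual agent API.

```python
import asyncio

# Hypothetical stand-in for the per-question pipeline the agents above
# describe (schema selection -> SQL generation -> verification). This is
# not the repository's actual API.
async def answer_sub_question(sub_question: str) -> str:
    await asyncio.sleep(0.1)  # placeholder for the real agent calls
    return f"result for: {sub_question}"

async def answer(user_question: str) -> list[str]:
    # Step 1: the decomposition agent would rewrite the input into
    # sub-questions; a single-item list stands in for that here.
    sub_questions = [user_question]
    # Step 2: run every sub-question concurrently, as described above.
    return await asyncio.gather(
        *(answer_sub_question(q) for q in sub_questions)
    )

print(asyncio.run(answer("Which product category sold best last quarter?")))
```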
@@ -189,22 +202,9 @@ Below is a sample entry for a view / table that we wish to expose to the LLM. T
 }
 ```
 
-See `./data_dictionary` for more details on how the data dictionary is structured and ways to **automatically generate it**.
-
-## Agentic Vector Based Approach (Iteration 5)
-
-This approach builds on the the Vector Based SQL Plugin approach that was previously developed, but adds a agentic approach to the solution.
-
-This agentic system contains the following agents:
-
--**Query Cache Agent:** Responsible for checking the cache for previously asked questions.
--**Query Decomposition Agent:** Responsible for decomposing complex questions, into sub questions that can be answered with SQL.
--**Schema Selection Agent:** Responsible for extracting key terms from the question and checking the index store for the queries.
--**SQL Query Generation Agent:** Responsible for using the previously extracted schemas and generated SQL queries to answer the question. This agent can request more schemas if needed. This agent will run the query.
--**SQL Query Verification Agent:** Responsible for verifying that the SQL query and results question will answer the question.
--**Answer Generation Agent:** Responsible for taking the database results and generating the final answer for the user.
-
-The combination of this agent allows the system to answer complex questions, whilst staying under the token limits when including the database schemas. The query cache ensures that previously asked questions, can be answered quickly to avoid degrading user experience.
+> [!NOTE]
+>
+> - See `./data_dictionary` for more details on how the data dictionary is structured and ways to **automatically generate it**.